
Publication


Featured research published by Jodi James.


International Conference on Multimedia and Expo | 2004

A gesture-driven multimodal interactive dance system

Gang Qian; Feng Guo; Todd Ingalls; Loren Olson; Jodi James; Thanassis Rikakis

In this paper, we report a real-time gesture-driven interactive system with multimodal feedback for the performing arts, especially dance. The system consists of two major parts: a gesture recognition engine and a multimodal feedback engine. The gesture recognition engine provides real-time recognition of the performer's gestures based on 3D marker coordinates from a marker-based motion capture system. According to the recognition results, the multimodal feedback engine produces associated visual and audio feedback for the performer. The interactive system is simple to implement and robust to errors in the 3D marker data. Satisfactory interactive dance performances have been successfully created and presented using the reported system.
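The recognition step described above, matching 3D marker coordinates against known gestures, can be sketched minimally. The marker layouts, gesture names, and nearest-template rule below are illustrative assumptions, not the paper's actual recognizer:

```python
import numpy as np

def classify_gesture(frame, templates):
    """Classify a pose frame (N markers x 3 coords) against labeled
    template poses by smallest mean per-marker distance.
    `templates` maps gesture name -> reference frame array.
    (Hypothetical simplification of a marker-based recognizer.)"""
    best, best_dist = None, float("inf")
    for name, ref in templates.items():
        # mean Euclidean distance across markers; less sensitive to a
        # single noisy marker than a max-distance rule would be
        dist = np.linalg.norm(frame - ref, axis=1).mean()
        if dist < best_dist:
            best, best_dist = name, dist
    return best

templates = {
    "arms_up":   np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.0]]),
    "arms_down": np.array([[0.0, 0.0, 0.5], [0.5, 0.0, 0.5]]),
}
observed = np.array([[0.05, 0.0, 1.9], [0.45, 0.0, 2.1]])
print(classify_gesture(observed, templates))  # → "arms_up"
```

The mean-distance rule is one reason such a scheme tolerates per-marker noise, echoing the robustness claim in the abstract.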


ACM Multimedia | 2006

Movement-based interactive dance performance

Jodi James; Todd Ingalls; Gang Qian; Loren Olsen; Daniel Whiteley; Siew Wong; Thanassis Rikakis

Movement-based interactive dance has recently attracted great interest in the performing arts. Utilizing motion capture technology, the goal of this project was to design the real-time motion analysis engine, staging, and communication systems necessary for a complete movement-based interactive multimedia dance performance. The movement analysis engine measured the correlation of dance movement between three people wearing similar sets of retro-reflective markers in a motion capture volume. This analysis provided the framework for the creation of an interactive dance piece, Lucidity, which we describe in detail. Staging such a work also presented additional challenges; we discuss these challenges and our proposed solutions. We conclude with a description of the final work and a summary of our future research objectives.
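One simple way to measure movement correlation between performers, as the analysis engine above does, is to correlate their speed profiles for a matched marker. This is an illustrative stand-in for the paper's engine, not its actual method:

```python
import numpy as np

def movement_correlation(traj_a, traj_b):
    """Correlation between two dancers' movement, computed as the
    Pearson correlation of their frame-to-frame speed profiles.
    traj_* are (frames x 3) position arrays for one matched marker.
    (Illustrative sketch; the paper's metric may differ.)"""
    speed_a = np.linalg.norm(np.diff(traj_a, axis=0), axis=1)
    speed_b = np.linalg.norm(np.diff(traj_b, axis=0), axis=1)
    return float(np.corrcoef(speed_a, speed_b)[0, 1])

t = np.linspace(0, 2 * np.pi, 100)
# A figure-eight-like trajectory with varying speed
dancer_a = np.stack([np.sin(t), np.sin(2 * t), np.zeros_like(t)], axis=1)
dancer_b = dancer_a + 0.3  # the same motion, offset in space
print(movement_correlation(dancer_a, dancer_b))  # → 1.0 (identical motion)
```

Correlating speeds rather than raw positions makes the measure invariant to where each dancer stands in the capture volume.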


Computer Vision and Pattern Recognition | 2007

Real-time Gesture Recognition with Minimal Training Requirements and On-line Learning

Stjepan Rajko; Gang Qian; Todd Ingalls; Jodi James

In this paper, we introduce the semantic network model (SNM), a generalization of the hidden Markov model (HMM) that uses factorization of state transition probabilities to reduce training requirements, increase the efficiency of gesture recognition and on-line learning, and allow more precision in gesture modeling. We demonstrate the advantages both formally and experimentally, using examples such as full-body multimodal gesture recognition via optical motion capture and a pressure sensitive floor, as well as mouse/pen gesture recognition. Our results show that our algorithm performs much better than the traditional approach in situations where training samples are limited and/or the precision of the gesture model is high.
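For context, the baseline that the SNM generalizes is the standard HMM, where a gesture is recognized by comparing sequence likelihoods under competing models. The toy parameters and gesture names below are invented for illustration:

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    via the scaled forward algorithm. pi: initial state distribution,
    A[i, j]: transition probabilities, B[i, k]: emission probabilities.
    (The traditional baseline that the SNM generalizes.)"""
    alpha = pi * B[:, obs[0]]
    log_like = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # predict, then weight by emission
        s = alpha.sum()                # rescale to avoid underflow
        log_like += np.log(s)
        alpha /= s
    return log_like

pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B_wave = np.array([[0.9, 0.1], [0.8, 0.2]])   # "wave" favors symbol 0
B_point = np.array([[0.2, 0.8], [0.1, 0.9]])  # "point" favors symbol 1
obs = [0, 0, 0, 1, 0]
# Classify by comparing likelihoods under each gesture model
print(forward_log_likelihood(obs, pi, A, B_wave) >
      forward_log_likelihood(obs, pi, A, B_point))  # → True
```

The SNM's contribution, per the abstract, is factorizing the transition probabilities so that far less training data is needed than for a fully parameterized HMM of this kind.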


Advances in Human-Computer Interaction | 2009

A dynamic Bayesian approach to computational Laban shape quality analysis

Dilip Swaminathan; Harvey D. Thornburg; Jessica Mumford; Stjepan Rajko; Jodi James; Todd Ingalls; Ellen Campana; Gang Qian; Pavithra Sampath; Bo Peng

Laban movement analysis (LMA) is a systematic framework for describing all forms of human movement and has been widely applied across animation, biomedicine, dance, and kinesiology. LMA (especially Effort/Shape) emphasizes how internal feelings and intentions govern the patterning of movement throughout the whole body. As we argue, a complex understanding of intention via LMA is necessary for human-computer interaction to become embodied in ways that resemble interaction in the physical world. We thus introduce a novel, flexible Bayesian fusion approach for identifying LMA Shape qualities from raw motion capture data in real time. The method uses a dynamic Bayesian network (DBN) to fuse movement features across the body and across time and, as we discuss, can be readily adapted for low-cost video. It has delivered excellent performance in preliminary studies comprising improvisatory movements. Our approach has been incorporated in Response, a mixed-reality environment where users interact via natural, full-body human movement and enhance their bodily-kinesthetic awareness through immersive sound and light feedback, with applications to kinesiology training, Parkinson's patient rehabilitation, interactive dance, and many other areas.


ACM Multimedia | 2004

Phrase structure detection in dance

Vidyarani M. Dyaberi; Hari Sundaram; Jodi James; Gang Qian

This paper deals with phrase structure detection in contemporary Western dance. Phrases are sequences of movements that exist at a higher semantic abstraction than gestures. The problem is important because phrasal structure in dance plays a key role in communicating meaning. We detect two fundamental dance structures, ABA and the Rondo, as they form the basis for more complex movement sequences. There are two key ideas in our work: (a) the use of a topological framework for deterministic structure detection and (b) novel phrasal distance metrics. The topological graph formulation succinctly captures the domain knowledge about the structure. We show how an objective function can be constructed given the topology; minimization of this function yields the phrasal structure and phrase boundaries. The distance metric incorporates both movement and hierarchical body structure. The results are excellent, with low median errors of 7% (ABA) and 15% (Rondo).
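The idea of recovering phrase boundaries by minimizing an objective can be shown on a toy 1-D feature sequence. The objective below (reward similar A segments, penalize similarity between A and B) is a deliberately simplified stand-in for the paper's topological formulation:

```python
import numpy as np

def detect_aba(features):
    """Find boundaries (i, j) of an ABA phrase in a 1-D feature
    sequence by minimizing an objective that rewards similarity
    between the two A segments and dissimilarity to the B segment.
    (Toy objective; the paper uses a topological graph formulation.)"""
    n = len(features)
    best, best_cost = None, float("inf")
    for i in range(2, n - 4):
        for j in range(i + 2, n - 2):
            a1, b, a2 = features[:i], features[i:j], features[j:]
            # segment distance here = difference of segment means
            d_aa = abs(a1.mean() - a2.mean())
            d_ab = abs(a1.mean() - b.mean())
            cost = d_aa - d_ab  # want similar A's, distinct B
            if cost < best_cost:
                best, best_cost = (i, j), cost
    return best

# Synthetic ABA sequence: two identical A sections around a distinct B
seq = np.array([0.0] * 10 + [5.0] * 10 + [0.0] * 10)
print(detect_aba(seq))  # → (10, 20)
```

A real phrasal distance would compare movement features across the body hierarchy rather than scalar means, but the boundary search by objective minimization is the same shape of computation.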


Multimedia Signal Processing | 2005

An Autonomous Dance Scoring System Using Marker-based Motion Capture

Huayue Chen; Gang Qian; Jodi James

In this paper, we present a dance scoring system developed using marker-based motion capture. The dance score is output in the form of Labanotation. Promising results have been obtained using the proposed dance scoring system.


Computer Music Modeling and Retrieval | 2008

Capturing Expressive and Indicative Qualities of Conducting Gesture: An Application of Temporal Expectancy Models

Dilip Swaminathan; Harvey D. Thornburg; Todd Ingalls; Stjepan Rajko; Jodi James; Ellen Campana; Kathleya Afanador

Many event sequences in everyday human movement exhibit temporal structure: for instance, footsteps in walking, the striking of balls in a tennis match, the movements of a dancer set to rhythmic music, and the gestures of an orchestra conductor. These events generate prior expectancies regarding the occurrence of future events. Moreover, these expectancies play a critical role in conveying expressive qualities and communicative intent through movement; thus they are of considerable interest in musical control contexts. To this end, we introduce a novel Bayesian framework, which we call the temporal expectancy model, and use it to develop an analysis tool for capturing expressive and indicative qualities of conducting gesture based on temporal expectancies. The temporal expectancy model is a general dynamic Bayesian network (DBN) that can be used to encode prior knowledge regarding temporal structure to improve event segmentation. The conducting analysis tool infers beat and tempo, which are indicative, and articulation, which is expressive, as well as temporal expectancies regarding beat (ictus and preparation instances) from conducting gesture. Experimental results using our analysis framework reveal a very strong correlation in how significantly the preparation expectancy builds up for staccato vs. legato articulation, which bolsters the case for temporal expectancy as a cognitive model for event anticipation and as a key factor in the communication of expressive qualities of conducting gesture. Our system operates on data obtained from a marker-based motion capture system, but can be easily adapted to more affordable technologies such as video camera arrays.


Computer Music Modeling and Retrieval | 2008

On Cross-Modal Perception of Musical Tempo and the Speed of Human Movement

Kathleya Afanador; Ellen Campana; Todd Ingalls; Dilip Swaminathan; Harvey D. Thornburg; Jodi James; Jessica Mumford; Gang Qian; Stjepan Rajko

Studies in cross-modal perception often use very simplified auditory and visual contexts. While these studies have been theoretically valuable, it is sometimes difficult to see how their findings can be ecologically valid or practically useful. This study hypothesizes that a musical parameter (tempo) may affect the perception of a human movement quality (speed) and finds that, although there are clear limitations, this may be a promising first step toward widening both the contexts in which cross-modal effects are studied and the application areas in which the findings can be used.


Asilomar Conference on Signals, Systems and Computers | 2006

The Computational Extraction of Spatio-Temporal Formal Structures in the Interactive Dance Work '22'

Vidyarani M. Dyaberi; Hari Sundaram; Thanassis Rikakis; Jodi James

In this paper we propose a framework for the computational extraction of the spatial and temporal characteristics of a single choreographic work. Computational frameworks can aid in revealing non-salient compositional structures in modern dance. The computational extraction of such features allows for the creation of interactive works in which the movement and the digital feedback (graphics, sound, etc.) are integrally connected at a deep structural level. It also facilitates a better understanding of the choreographic process. There are two key contributions in this paper: (a) a systematic analysis of the observable and non-salient aspects of solo dance form, and (b) computational analysis of spatio-temporal phrasing structures guided by a critical understanding of observable form. Our analysis results are excellent, indicating the presence of rich, latent spatio-temporal organization in specific semi-improvisatory modern dance works that may provide rich structural material for interactivity.


Conference on Image and Video Retrieval | 2006

Estimating the physical effort of human poses

Yinpeng Chen; Hari Sundaram; Jodi James

This paper deals with the problem of estimating the effort required for a human being to maintain a static pose. The problem is important in developing effective pose classification as well as in developing models of human attention. We estimate the human pose effort using two kinds of body constraints: skeletal constraints and gravitational constraints. The extracted features are combined using SVM regression to estimate the pose effort. We tested our algorithm on 55 poses with different annotated efforts, with excellent results. Our user studies additionally validate our approach.
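The regression step described above can be sketched with scikit-learn's SVR as a stand-in. The two features, their values, and the effort annotations below are all invented for illustration, not the paper's data:

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical per-pose features: [skeletal strain, gravitational load],
# each annotated with an effort score (all values invented).
X = np.array([[0.1, 0.2], [0.3, 0.5], [0.6, 0.7], [0.9, 0.9],
              [0.2, 0.1], [0.5, 0.6], [0.8, 0.8], [0.4, 0.3]])
y = np.array([0.15, 0.42, 0.66, 0.92, 0.14, 0.55, 0.81, 0.34])

# Fit SVM regression mapping combined constraint features -> effort
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)

easy_pose = model.predict(np.array([[0.15, 0.15]]))[0]
hard_pose = model.predict(np.array([[0.85, 0.90]]))[0]
print(easy_pose < hard_pose)  # harder pose -> higher predicted effort
```

Combining heterogeneous constraint features through a learned regressor, rather than a hand-tuned weighting, is the design choice the abstract highlights.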

Collaboration


Dive into Jodi James's collaborations.

Top Co-Authors

Gang Qian (Arizona State University)
Todd Ingalls (Arizona State University)
Stjepan Rajko (Arizona State University)
Hari Sundaram (Arizona State University)
Ellen Campana (Arizona State University)