Publication


Featured research published by Priyamvada Tripathi.


IEEE International Conference on Automatic Face and Gesture Recognition | 2004

Automated gesture segmentation from dance sequences

Kanav Kahol; Priyamvada Tripathi; Sethuraman Panchanathan

Complex human motion (e.g. dance) sequences are typically analyzed by segmenting them into shorter motion sequences, called gestures. However, this segmentation process is subjective, and varies considerably from one choreographer to another. Dance sequences also exhibit a large vocabulary of gestures. In this paper, we propose an algorithm called hierarchical activity segmentation. This algorithm employs a dynamic hierarchical layered structure to represent human anatomy, and uses low-level motion parameters to characterize motion in the various layers of this hierarchy, which correspond to different segments of the human body. This characterization is used with a naive Bayesian classifier to derive choreographer profiles from empirical data; these profiles are then used to predict how particular choreographers segment gestures in other motion sequences. When the predictions were tested with a library of 45 3D motion capture sequences (with 185 distinct gestures) created by 5 different choreographers, they were found to be 93.3% accurate.
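The paper is summarized above without code; as a rough illustration of the classification step, here is a minimal Python sketch of how per-body-segment low-level motion features could feed a naive Bayesian classifier to build a per-choreographer segmentation profile. The frame format, feature set, and function names are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a per-choreographer naive Bayes
# model that flags gesture boundaries from low-level, per-segment motion
# features. The (T, S, 3) frame layout and feature choices are assumptions.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def motion_features(frames: np.ndarray) -> np.ndarray:
    """Derive simple low-level motion parameters per frame.

    `frames` is assumed to be (T, S, 3): T time steps, S body segments
    (the layers of the anatomical hierarchy), 3D positions.
    Returns (T-2, 2*S): per-segment speed and acceleration magnitudes.
    """
    vel = np.diff(frames, axis=0)            # (T-1, S, 3)
    acc = np.diff(vel, axis=0)               # (T-2, S, 3)
    speed = np.linalg.norm(vel[1:], axis=2)  # (T-2, S)
    accel = np.linalg.norm(acc, axis=2)      # (T-2, S)
    return np.hstack([speed, accel])

def fit_profile(frames: np.ndarray, boundary_labels: np.ndarray) -> GaussianNB:
    """Fit one choreographer's profile from sequences they segmented by hand.

    `boundary_labels` marks (per frame) whether this choreographer placed a
    gesture boundary there.
    """
    X = motion_features(frames)
    y = boundary_labels[2:]  # align labels with the twice-differenced features
    return GaussianNB().fit(X, y)

def predict_boundaries(profile: GaussianNB, frames: np.ndarray) -> np.ndarray:
    """Predict, frame by frame, where this choreographer would segment."""
    return profile.predict(motion_features(frames))
```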


IEEE International Conference on Image Processing | 2003

Gesture segmentation in complex motion sequences

Kanav Kahol; Priyamvada Tripathi; Sethuraman Panchanathan; Thanassis Rikakis

Complex human motion sequences (such as dances) are typically analyzed by segmenting them into shorter motion sequences, called gestures. However, this segmentation process is subjective, and varies considerably from one human observer to another. In this paper, we propose an algorithm called hierarchical activity segmentation. This algorithm employs a dynamic hierarchical layered structure to represent the human anatomy, and uses low-level motion parameters to characterize motion in the various layers of this hierarchy, which correspond to different segments of the human body. This characterization is used with a naive Bayesian classifier to derive creator profiles from empirical data. Those profiles are then used to predict how creators will segment gestures in other motion sequences. When the predictions were tested with a library of 3D motion capture sequences, which were segmented by 2 choreographers, they were found to be reasonably accurate.


ACM Transactions on Multimedia Computing, Communications, and Applications | 2006

Modeling context in haptic perception, rendering, and visualization

Kanav Kahol; Priyamvada Tripathi; Troy L. McDaniel; Laura Bratton; Sethuraman Panchanathan

Haptic perception refers to the ability of human beings to perceive spatial properties through touch-based sensations. In haptics, contextual clues about the material, shape, size, texture, and weight configurations of an object are perceived by individuals, leading to recognition of the object and its spatial features. In this paper, we present strategies and algorithms to model context in haptic applications that allow users to haptically explore objects in virtual reality/augmented reality environments. Initial results show significant improvement in the accuracy and efficiency of haptic perception in augmented reality environments when compared to conventional approaches that do not model context in haptic rendering.


Attention, Perception, & Psychophysics | 2009

Haptic concepts in the blind

Donald Homa; Kanav Kahol; Priyamvada Tripathi; Laura Bratton; Sethuraman Panchanathan

We investigated the acquisition of haptic concepts by the blind and compared it with acquisition by sighted controls. Each subject (blind, sighted but blindfolded, sighted and touching, or sighted only) initially classified eight objects into two categories using a study/test format, followed by a recognition/classification test involving old, new, and prototype forms. Each object varied along the dimensions of shape, size, and texture, with each dimension having five values. The categories were linearly separable in three dimensions, but no single dimension permitted 100% accurate classification. The results revealed that blind subjects learned the categories quickly and comparably with sighted controls. On the classification test, all groups performed equivalently, with the category prototype classified more accurately than the old or new stimuli. The blind subjects differed from the other subjects on the recognition test in two ways: they were least likely to false alarm to novel patterns that belonged to the category, but most likely to false alarm to the category prototype, which they falsely called "old" 100% of the time. We discuss these results in terms of current views of categorization.
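To make the stimulus structure concrete, here is a small illustrative sketch (with invented values, not the authors' stimuli) of a category pair that is linearly separable across three dimensions while no single dimension classifies perfectly:

```python
# Illustrative sketch only: a stimulus set with the structure described in
# the abstract. The specific values are invented for demonstration.
import numpy as np

# Eight objects; columns are (shape, size, texture), each on a 5-point scale.
category_a = np.array([[1, 2, 3], [2, 3, 1], [3, 1, 2], [2, 2, 2]])
category_b = np.array([[4, 3, 2], [3, 4, 2], [2, 3, 4], [4, 4, 4]])

# Linearly separable in 3D: the sum of the dimensions splits the categories.
assert category_a.sum(axis=1).max() < category_b.sum(axis=1).min()

# ...but no single dimension does: value ranges overlap on every dimension.
for d in range(3):
    a_vals, b_vals = set(category_a[:, d]), set(category_b[:, d])
    assert a_vals & b_vals, f"dimension {d} alone would separate the sets"

# Category prototypes (the central tendencies subjects classified best).
prototype_a = category_a.mean(axis=0)  # [2.0, 2.0, 2.0]
prototype_b = category_b.mean(axis=0)  # [3.25, 3.5, 3.0]
```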


Conference on Creating, Connecting and Collaborating Through Computing | 2004

Formalizing cognitive and motor strategy of haptic exploratory movements of individuals who are blind

Kanav Kahol; Priyamvada Tripathi; Sethuraman Panchanathan; Morris Goldberg

Perception and action are closely related in the haptic modality. While there have been a few psychological studies exploring the haptic exploration procedures of blind and sighted individuals, the results of these studies have been very abstract and have not proven useful in the design of computer applications. In order to develop natural and intuitive haptic interfaces, deeper knowledge of haptic manual exploration strategies is needed. This work details psychophysical experiments performed with individuals who are blind or visually impaired to understand their cognitive and motor exploration strategies. The paper further suggests how the exploration strategies discovered by these experiments can be used to design proactive haptic interfaces between humans and machines. Such haptic interfaces have great potential as an assistive technology for individuals who are blind.


Advances in Multimedia | 2005

Modeling context in haptic perception, rendering and visualization

Kanav Kahol; Priyamvada Tripathi; Troy L. McDaniel; Sethuraman Panchanathan

Haptic perception refers to the human ability to perceive spatial properties through tactile and haptic sensations. Humans have an uncanny ability to analyze objects based only on sparse information from haptic stimuli. Contextual clues about an object's material, overall shape, size, and weight configuration, perceived by individuals, lead to recognition of the object and its spatial features. In this paper, we present strategies and algorithms to model context in haptic applications that allow users to haptically explore objects in virtual reality/augmented reality environments. Our methodology is based on modeling users' cognitive and motor strategies of haptic exploration. Additionally, we model the physiological arrangement of tactile sensors in the human hand. These models provide the context to adapt haptic displays to a user's style of haptic perception and exploration, and to the present state of the user's exploration. We designed a tactile cueing paradigm to test the validity of the contextual models. Initial results show improvement in the accuracy and efficiency of haptic perception when compared to conventional approaches that do not model context in haptic rendering.
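As a rough illustration of the idea of contextual adaptation (not the authors' system), the sketch below combines a hypothetical exploration-state model with a stand-in for receptor density to pick a rendering detail level; all names, values, and the adaptation rule are assumptions:

```python
# A minimal sketch of adapting haptic rendering to contextual models.
# Every state name, density value, and heuristic here is assumed.
from dataclasses import dataclass

@dataclass
class ExplorationContext:
    procedure: str       # e.g. "lateral_motion", "contour_following"
    contact_region: str  # hand region in contact, e.g. "fingertip", "palm"
    speed_mm_s: float    # current hand speed

# Hypothetical receptor densities per hand region (fingertips densest),
# standing in for the "physiological arrangement of tactile sensors".
RECEPTOR_DENSITY = {"fingertip": 1.0, "finger": 0.5, "palm": 0.2}

def rendering_detail(ctx: ExplorationContext) -> float:
    """Pick a texture-rendering detail level from the exploration context.

    Heuristic: render finer detail where receptor density is high and the
    user is moving slowly (texture-oriented procedures); coarser otherwise.
    """
    density = RECEPTOR_DENSITY.get(ctx.contact_region, 0.2)
    slow = 1.0 if ctx.speed_mm_s < 50 else 0.4
    texture_bias = 1.0 if ctx.procedure == "lateral_motion" else 0.6
    return density * slow * texture_bias

# e.g. a fingertip slowly stroking a surface gets full-detail texture cues:
print(rendering_detail(ExplorationContext("lateral_motion", "fingertip", 20.0)))
```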


IEEE International Workshop on Haptic Audio Visual Environments and their Applications | 2005

A methodology to establish ground truth for computer vision algorithms to estimate haptic features from visual images

Troy L. McDaniel; Kanav Kahol; Priyamvada Tripathi; David P. Smith; Laura Bratton; Ravi Atreya; Sethuraman Panchanathan

Humans have an uncanny ability to estimate haptic features of an object, such as haptic shape, size, texture, and material, by visual inspection. A significant computer vision problem is that of estimating haptic features from visual images. While explorations have been made in the estimation of visual features such as visual texture, work on estimation of haptic features from video is still in its infancy. We present a methodology to establish ground truth for the estimation of haptic features from visual images. We assembled a visio-haptic database of 48 objects ranging from nonsense objects to everyday objects. Variation across the objects was controlled by systematically varying haptic features such as shape and texture, and the physical and perceptual ground truth of visual and haptic features was documented. This database provides visio-haptic features of objects and can be used to develop algorithms to estimate haptic features from visual images. Finally, a tactile cueing experiment is presented demonstrating how visio-haptic ground truth can be used to assess the accuracy of a system for visio-haptic conversion of image content.
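As an illustration of what such ground truth might look like in practice, here is a hypothetical record layout for one database entry; the field names and types are assumptions, not the authors' schema:

```python
# Illustrative only: a possible record layout for a visio-haptic
# ground-truth database like the one described above.
from dataclasses import dataclass, field

@dataclass
class VisioHapticRecord:
    object_id: int                # 1..48 in the assembled database
    object_kind: str              # "nonsense" or "everyday"
    # Physical ground truth, measured from the object itself:
    shape: str                    # e.g. "cube", "cylinder"
    texture_grit: float           # e.g. a surface-roughness measure
    size_mm: tuple                # bounding dimensions
    # Perceptual ground truth, collected from human raters:
    visual_texture_rating: float  # mean rated roughness from images
    haptic_texture_rating: float  # mean rated roughness from touch
    image_paths: list = field(default_factory=list)  # the visual stimuli

# An algorithm estimating haptic features from images can then be scored
# by comparing its predictions against haptic_texture_rating per object.
```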


Human Technology: An Interdisciplinary Journal on Humans in ICT Environments | 2011

Mining Creativity Research to Inform Design Rationale in Open Source Communities

Winslow Burleson; Priyamvada Tripathi

Design rationale can act as a creativity support tool. Recent findings from the field of creativity research present new opportunities that can guide the implementation and evaluation of design rationale’s ability to foster creative processes and outcomes. By encouraging the exploration of failure through use of analogy, design rationale can foster creative transfer and enable progress in new directions. Open source communities offer an opportunity to observe a form of intrinsically motivated ad hoc design rationale, exhibiting formal and informal information transfer links within forums and allowing access to common tools, expertise, and mentorship. A discussion of a spectrum of implementations of design rationale informs strategies to mitigate conflicts and advance inherent synergies between design rationale and creativity.


Ambient Media and Systems | 2008

Implication of multimodality in ambient interfaces

Priyamvada Tripathi; Sethuraman Panchanathan

Ambient interfaces have long held the promise of enhanced and effective human-machine interaction. Ambient interfaces can adapt to human activity, allowing seamless exchange of information. This goal requires a coordinated development effort that incorporates a thorough understanding of the human perceptual system into the design of interfaces. In this way, ambient interfaces can not only supplement the current activities of humans but also expand their functionality toward novel approaches to interaction. Since humans interact with their environments through multiple channels, multimodality is an indispensable aspect of ambient interfaces. Multimodality is a broad term that encompasses not only the sensory aspects of human-machine interaction but also the cognitive interaction that is responsible for a unified perception. Thus, sensory formats are essential to the construction of ambient interfaces but do not constitute the complete picture. In this paper, we propose that ambience can only be achieved when multiple modalities are considered in toto. In addition, multimodality cannot be implemented in isolation from the desired tasks and their pertaining contexts. We propose an integrated view of multimodal interfaces that differentiates itself from previous conceptions of multimodal human-machine interaction, in which multimodal interfaces have become synonymous with voice-and-gesture interaction. We propose that this category of multimodality does not fully exploit the human user's capability of effectively interacting and communicating with the machine. Both semantic congruence and syntactic constraints can be dealt with only when we attempt to create interfaces that include the human perceptual system in their design. This can be achieved by a staged evaluation process wherein each interface is associated with its joint performance value and accessibility. Besides this, the interface and the human must share the same real-world model for effective reference.


International Conference on Supporting Group Work | 2009

Creativity support in IT research organization

Priyamvada Tripathi

All domains of human activity and society require creativity. This dissertation applies machine learning and data mining techniques to create a framework for applying emerging Human Centric Computing (HCC) systems to the study and creation of creativity support tools. The proposed system collects and analyzes high-resolution online and physically captured contextual and social data to substantially contribute to new and better understandings of workplace behavior, social and affective experience, and creative activities. Using this high-granularity data, we will develop and evaluate dynamic instruments that use real-time sensing and inference algorithms to provide guidance and support on events and processes related to affect and creativity. In the long term, it is expected that this approach will lead to adaptive reflective technologies that stimulate collaborative activity, reduce time pressure and interruption, mitigate the detrimental effects of negative affect, and increase individual and team creative activity and outcomes.

Collaboration


Dive into Priyamvada Tripathi's collaborations.

Top Co-Authors

Kanav Kahol (Arizona State University)
Laura Bratton (Arizona State University)
John A. Black (Arizona State University)
David P. Smith (Arizona State University)
Donald Homa (Arizona State University)