Publications

Featured research published by Sreekar Krishna.


International Conference on Acoustics, Speech, and Signal Processing | 2005

A methodology for evaluating robustness of face recognition algorithms with respect to variations in pose angle and illumination angle

Greg Little; Sreekar Krishna; John A. Black; Sethuraman Panchanathan

In this paper, we present a methodology for precisely comparing the robustness of face recognition algorithms with respect to changes in pose angle and illumination angle. For this study, we have chosen four widely-used algorithms: two subspace analysis methods (principal component analysis (PCA) and linear discriminant analysis (LDA)) and two probabilistic learning methods (hidden Markov models (HMM) and Bayesian intra-personal classifier (BIC)). We compare the recognition robustness of these algorithms using a novel database (FacePix) that captures face images with a wide range of pose angles and illumination angles. We propose a method for deriving a robustness measure for each of these algorithms, with respect to pose and illumination angle changes. The results of this comparison indicate that the subspace methods perform more robustly than the probabilistic learning methods in the presence of pose and illumination angle changes.
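The abstract does not spell out the robustness measure itself. As a minimal illustrative sketch only (the function name and the per-degree accuracy-loss formulation below are assumptions, not the paper's method), one could score an algorithm by how slowly its recognition accuracy falls off as the angle moves away from frontal:

```python
import numpy as np

def robustness_measure(angles, accuracies):
    """Sketch of a robustness score: 1 minus the mean per-degree
    accuracy loss relative to the frontal (0-degree) accuracy.
    Higher values mean recognition degrades more slowly as the
    pose or illumination angle deviates from frontal."""
    angles = np.asarray(angles, dtype=float)
    accuracies = np.asarray(accuracies, dtype=float)
    # Accuracy at the angle closest to frontal serves as the baseline.
    frontal = accuracies[np.argmin(np.abs(angles))]
    deviation = np.abs(angles)
    # Only count accuracy losses, not chance improvements off-frontal.
    drop = np.clip(frontal - accuracies, 0.0, None)
    per_degree = drop[deviation > 0] / deviation[deviation > 0]
    return 1.0 - per_degree.mean()

# Example: an algorithm whose accuracy falls from 0.95 at 0 degrees
# to 0.65 at +/-30 degrees.
score = robustness_measure([-30, 0, 30], [0.65, 0.95, 0.65])
```

A score near 1.0 indicates a flat accuracy curve; steeper degradation pulls the score down, which lets algorithms be ranked on robustness independently of their frontal accuracy.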


Conference on Computers and Accessibility | 2005

A wearable face recognition system for individuals with visual impairments

Sreekar Krishna; Greg Little; John A. Black; Sethuraman Panchanathan

This paper describes the iCare Interaction Assistant, an assistive device for helping individuals who are visually impaired during social interactions. The research presented here addresses the problems encountered in implementing real-time face recognition algorithms on a wearable device. Face recognition is the initial step towards building a comprehensive social interaction assistant that will identify and interpret facial expressions, emotions and gestures. Experiments conducted for selecting a face recognition algorithm that works despite changes in facial pose and illumination angle are reported. Performance details of the face recognition algorithms tested on the device are presented, along with the overall performance of the system. The specifics of the hardware components used in the wearable device are described, and a block diagram of the wearable system is explained in detail.


Human Factors in Computing Systems | 2010

VibroGlove: an assistive technology aid for conveying facial expressions

Sreekar Krishna; Shantanu Bala; Troy L. McDaniel; Stephen McGuire; Sethuraman Panchanathan

In this paper, a novel interface is described for enhancing human-human interpersonal interactions. Specifically, the device is targeted as an assistive aid to deliver the facial expressions of an interaction partner to people who are blind or visually impaired. Vibro-tactors, mounted on the back of a glove, provide a means for conveying haptic emoticons that represent the six basic human emotions and the neutral expression of the user's interaction partner. The detailed design of the haptic interface and the haptic icons of expressions are presented, along with a user study involving a subject who is blind, as well as sighted, blind-folded participants. Results reveal the potential for enriching social communication for people with visual disabilities.


IEEE International Workshop on Haptic Audio Visual Environments and Games | 2008

Using a haptic belt to convey non-verbal communication cues during social interactions to individuals who are blind

Troy L. McDaniel; Sreekar Krishna; Vineeth Nallure Balasubramanian; Dirk Colbry; Sethuraman Panchanathan

Good social skills are important and provide for a healthy, successful life; however, individuals with visual impairments are at a disadvantage when interacting with sighted peers due to inaccessible non-verbal cues. This paper presents a haptic (vibrotactile) belt to assist individuals who are blind or visually impaired by communicating non-verbal cues during social interactions. We focus on non-verbal communication pertaining to the relative location of the communicators with respect to the user in terms of direction and distance. Results from two experiments show that the haptic belt is effective in using vibration location and duration to communicate the relative direction and distance, respectively, of an individual in the user's visual field.
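The location-to-direction and duration-to-distance mapping described above can be sketched as follows. The tactor count, duration values, and distance thresholds (loosely modeled on proxemic zones) are illustrative assumptions, not the values used in the paper's experiments:

```python
def belt_cue(bearing_deg, distance_m, n_tactors=8, base_ms=200):
    """Sketch: map an interaction partner's relative direction to a
    tactor position on a waist-worn belt, and their distance to a
    vibration duration.  bearing_deg: 0 = directly ahead, increasing
    clockwise.  Returns (tactor_index, duration_ms)."""
    # Direction -> nearest of n_tactors evenly spaced around the waist.
    sector = 360.0 / n_tactors
    tactor = int(((bearing_deg % 360.0) + sector / 2) // sector) % n_tactors
    # Distance -> one of three durations (closer partner = longer buzz).
    # Thresholds are assumed values echoing personal/social/public space.
    if distance_m < 1.2:
        duration = 3 * base_ms
    elif distance_m < 3.6:
        duration = 2 * base_ms
    else:
        duration = base_ms
    return tactor, duration

# Partner slightly to the right of straight ahead, two metres away.
cue = belt_cue(40.0, 2.0)
```

Encoding direction spatially and distance temporally keeps the two cues on independent perceptual channels, so the wearer can decode both from a single vibration burst.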


EURASIP Journal on Advances in Signal Processing | 2008

Person-independent head pose estimation using biased manifold embedding

Vineeth Nallure Balasubramanian; Sreekar Krishna; Sethuraman Panchanathan

Head pose estimation has been an integral problem in the study of face recognition systems and human-computer interfaces, as part of biometric applications. A fine estimate of the head pose angle is necessary and useful for several face analysis applications. To determine the head pose, face images with varying pose angles can be considered to be lying on a smooth low-dimensional manifold in high-dimensional image feature space. However, when there are face images of multiple individuals with varying pose angles, manifold learning techniques often do not give accurate results. In this work, we propose a framework for a supervised form of manifold learning called Biased Manifold Embedding to obtain improved performance in head pose angle estimation. This framework goes beyond pose estimation and can be applied to all regression applications. Although formulated for a regression scenario, it unifies other supervised approaches to manifold learning that have been proposed so far. Detailed studies of the proposed method are carried out on the FacePix database, which contains 181 face images each of 30 individuals with fine-grained pose angle variations. Since biometric applications in the real world may not contain this level of granularity in training data, an analysis of the methodology is performed on sparsely sampled data to validate its effectiveness. The average pose angle estimation error obtained in our experiments matched the best results reported for head pose estimation using related approaches.
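The abstract does not give the exact biasing function, but the core idea — use the pose labels to distort pairwise distances before embedding — can be sketched with an assumed multiplicative bias (the function and its `alpha` parameter are illustrative, not the paper's formulation):

```python
import numpy as np

def biased_distances(X, pose_labels, alpha=1.0):
    """Sketch of the biasing idea behind Biased Manifold Embedding:
    scale pairwise feature-space distances by a factor that grows with
    the difference in pose angle, so images with similar poses are
    pulled together regardless of identity.  alpha controls the bias
    strength (alpha = 0 recovers plain Euclidean distances)."""
    X = np.asarray(X, dtype=float)
    pose = np.asarray(pose_labels, dtype=float)
    # Euclidean distances in image-feature space.
    diff = X[:, None, :] - X[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    # Pose-label differences, normalised to [0, 1].
    p = np.abs(pose[:, None] - pose[None, :])
    if p.max() > 0:
        p = p / p.max()
    # Inflate distances between pairs with dissimilar pose labels.
    return d * (1.0 + alpha * p)
```

The resulting matrix can then be fed to any distance-based embedding method (e.g. MDS or Isomap), which is what makes the supervision person-independent: identity differences no longer dominate the learned manifold.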


Human Factors in Computing Systems | 2009

Using tactile rhythm to convey interpersonal distances to individuals who are blind

Troy L. McDaniel; Sreekar Krishna; Dirk Colbry; Sethuraman Panchanathan

This paper presents a scheme for using tactile rhythms to convey interpersonal distance to individuals who are blind or visually impaired, with the goal of providing access to non-verbal cues during social interactions. A preliminary experiment revealed that subjects could identify the proposed tactile rhythms and found them intuitive for the given application. Future work aims to improve recognition results and increase the number of interpersonal distances conveyed by incorporating temporal change information into the proposed methodology.


IEEE International Workshop on Haptic Audio Visual Environments and Games | 2010

MOVeMENT: A framework for systematically mapping vibrotactile stimulations to fundamental body movements

Troy L. McDaniel; Daniel Villanueva; Sreekar Krishna; Sethuraman Panchanathan

Traditional forms of motor learning, i.e., audio-visual instruction and/or guidance through physical contact, are limited in certain situations, such as instruction in a noisy or busy classroom, or across a large physical separation. Vibrotactile stimulation has recently emerged as a promising alternative or augmentation to traditional forms of motor training and learning, but has evolved mostly in an ad hoc manner, where trainers or researchers have resorted to application-specific vibrotactile-movement mappings. In contrast, this paper proposes a novel framework, MOVeMENT (Mapping Of Vibrations to moveMENT), that provides systematic design guidelines for mapping vibrotactile stimulations to human body movements in motor skill training applications. We present a pilot test to validate the proposed framework. Results of the study are promising, and the subjects found the movement instructions intuitive and easy to recognize.


International Conference on Computers Helping People with Special Needs | 2010

Assistive technologies as effective mediators in interpersonal social interactions for persons with visual disability

Sreekar Krishna; Sethuraman Panchanathan

In this paper, we discuss the use of assistive technologies for enriching the social interactions of people who are blind or visually impaired with their sighted counterparts. Specifically, we describe and demonstrate two experiments with the Social Interaction Assistant: a) providing rehabilitative feedback for reducing stereotypic body mannerisms, which are known to impede social interactions, and b) providing an assistive technology for accessing the facial expressions of interaction partners. We highlight the importance of these two problems in everyday social interactions of the visually disabled community. We propose a novel use of wearable computing technologies (both sensing and actuating) for augmenting the sensory deficiencies of the user population, while ensuring that their cognitive faculties are not compromised in any manner. Computer vision, motion sensing and haptic technologies are combined in the proposed platform towards enhancing social interactions of the targeted user population.


Human Factors in Computing Systems | 2010

Heartbeats: a methodology to convey interpersonal distance through touch

Troy L. McDaniel; Daniel Villanueva; Sreekar Krishna; Dirk Colbry; Sethuraman Panchanathan

Individuals who are blind are at a disadvantage when interacting with sighted peers given that nearly 65% of interaction cues are non-verbal in nature [3]. Previously, we proposed an assistive device in the form of a vibrotactile belt capable of communicating interpersonal positions (direction and distance between users who are blind and the other participants involved in a social interaction). In this paper, we extend our work through use of novel tactile rhythms to provide access to the non-verbal cue of interpersonal distance, referred to as Proxemics in popular literature. Experimental results reveal that subjects found the proposed approach to be intuitive, and they could accurately recognize the rhythms, and hence, the interpersonal distances.


ACM Multimedia | 2009

Person localization using a wearable camera towards enhancing social interactions for individuals with visual impairment

Lakshmi Gade; Sreekar Krishna; Sethuraman Panchanathan

Individuals with visual impairments are at a loss when it comes to everyday social interactions, as the majority (65%) of these interactions happen through visual non-verbal media. Recently, efforts have been made towards the development of an assistive technology called the Social Interaction Assistant [14], which enables access to such useful cues so as to compensate for the lack of vision and other visual impairments. There have been studies which enumerate the important needs of such individuals when they interact in social situations. Along with feedback about their own social behavior, these studies indicate that individuals with visual disabilities are interested in a number of cues related to the people in their surroundings. In this paper, we discuss the importance of person localization in building a human-centric assistive technology that addresses the essential needs of visually impaired users. Next, we describe the challenges that arise when a wearable camera setup is used as an input source to perform person localization. Finally, we present a computer vision based algorithm adapted to handle the issues inherent in such a wearable camera setup and demonstrate its performance on a number of example sequences.

Collaboration

Sreekar Krishna's top co-authors:

John A. Black (Arizona State University)
Dirk Colbry (Michigan State University)
Greg Little (Arizona State University)
Shantanu Bala (Arizona State University)
Colin Juillard (Arizona State University)