Publication


Featured research publications by Vijay Rajanna.


Journal of Medical Systems | 2016

KinoHaptics: An Automated, Wearable, Haptic Assisted, Physio-therapeutic System for Post-surgery Rehabilitation and Self-care

Vijay Rajanna; Patrick Vo; Jerry Barth; Matthew Mjelde; Trevor Grey; Cassandra Oduola; Tracy Hammond

A carefully planned, structured, and supervised physiotherapy program following surgery is crucial for successful recovery from physical injuries. Nearly 50% of surgeries fail due to unsupervised and erroneous physiotherapy. Retaining a physiotherapist for an extended period is expensive, and a physiotherapist is sometimes inaccessible. Researchers have tried to leverage advancements in wearable sensors and motion tracking by building affordable, automated, physio-therapeutic systems that direct a physiotherapy session by providing audio-visual feedback on the patient’s performance. However, many aspects of an automated physiotherapy program are yet to be addressed by existing systems: a wide classification of patients’ physiological conditions to be diagnosed, multiple demographics of patients (blind, deaf, etc.), and the need to persuade patients to adopt the system for an extended period for self-care. In our research, we address these aspects by building a health behavior change support system called KinoHaptics for post-surgery rehabilitation. KinoHaptics is an automated, wearable, haptic-assisted, physio-therapeutic system that can be used by a wide variety of demographics and for various physiological conditions. The system provides rich and accurate vibro-haptic feedback that can be felt by the user irrespective of physiological limitations, and it is built to ensure that no injuries are induced during the rehabilitation period. The persuasive nature of the system allows for personal goal-setting, progress tracking, and, most importantly, lifestyle compatibility. The system was evaluated under laboratory conditions with 14 users. Results show that KinoHaptics is highly convenient to use, and that the vibro-haptic feedback is intuitive, accurate, and was shown to prevent accidental injuries. The results also show that KinoHaptics is persuasive in nature, as it supports behavior change and habit building. The successful acceptance of KinoHaptics demonstrates the need for, and future scope of, automated physio-therapeutic systems for self-care and behavior change. It also shows that such systems, when incorporating vibro-haptic feedback, encourage strong adherence to the physiotherapy program and can have a profound impact on the physiotherapy experience, resulting in a higher acceptance rate.
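
To make the injury-prevention idea concrete, here is a minimal sketch of the kind of safety check such a system needs: comparing a joint angle estimated from the wearable's motion sensors against a prescribed range and driving vibro-haptic feedback when the patient over- or under-extends. All names, thresholds, and the feedback channel are illustrative assumptions, not the authors' implementation.

# Minimal sketch of a vibro-haptic safety check for guided physiotherapy.
# Thresholds and function names are hypothetical, for illustration only.

SAFE_RANGE_DEG = (10.0, 95.0)  # prescribed flexion range for one exercise

def haptic_feedback(intensity: float) -> None:
    """Placeholder for driving a vibration motor (0.0 to 1.0 intensity)."""
    print(f"vibrate at {intensity:.2f}")

def check_repetition(joint_angle_deg: float) -> None:
    lo, hi = SAFE_RANGE_DEG
    if joint_angle_deg > hi:
        haptic_feedback(1.0)   # over-extension risks injury: strong alert
    elif joint_angle_deg < lo:
        haptic_feedback(0.3)   # incomplete movement: gentle prompt

for angle in (45.0, 98.5, 7.2):  # simulated sensor readings
    check_repetition(angle)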


Proceedings of the Third ACM SIGSPATIAL International Workshop on the Use of GIS in Public Health | 2014

Step up life: a context aware health assistant

Vijay Rajanna; Raniero Lara-Garduno; Dev Jyoti Behera; Karthic Madanagopal; Daniel W. Goldberg; Tracy Hammond

A recent trend in popular health news is reporting the dangers of prolonged inactivity in one's daily routine. The claims are wide in variety and aggressive in nature, linking a sedentary lifestyle with obesity and shortened lifespans [25]. Rather than forcing an individual to perform physical exercise for a predefined interval of time, we propose the design, implementation, and evaluation of a context-aware health assistant system, called Step Up Life, that encourages a user to adopt a healthy lifestyle by performing simple, contextually suitable physical exercises. Step Up Life is a smartphone application that provides physical activity reminders while respecting the user's practical constraints, exploiting context information such as the user's location, personal preferences, calendar events, time of day, and the weather [9]. A fully functional implementation of Step Up Life was evaluated through user studies.
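
As a rough illustration of how such context sources might gate a reminder, the sketch below combines location, calendar, time of day, and weather into a single suggestion. The context fields, rules, and suggestions are assumptions for illustration, not the authors' published logic.

# Minimal sketch of context-aware reminder gating in the spirit of
# Step Up Life; the rules here are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, time
from typing import Optional

@dataclass
class Context:
    location: str        # e.g. "office", "home", "commuting" (from GPS)
    in_meeting: bool     # from the user's calendar
    raining: bool        # from a weather service
    now: datetime

def suggest_exercise(ctx: Context) -> Optional[str]:
    awake = time(8) <= ctx.now.time() <= time(21)
    if not awake or ctx.in_meeting or ctx.location == "commuting":
        return None      # a reminder now would be intrusive or impractical
    return "desk stretch" if ctx.raining else "short walk outside"

ctx = Context("office", in_meeting=False, raining=True,
              now=datetime(2014, 11, 4, 15, 0))
print(suggest_exercise(ctx))  # desk stretch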


Conference on Computers and Accessibility | 2016

Gaze Typing Through Foot-Operated Wearable Device

Vijay Rajanna

Gaze typing, a gaze-assisted text entry method, allows individuals with motor (arm, spine) impairments to enter text on a computer using a virtual keyboard and their gaze. Though gaze typing is widely accepted, the method is limited by its low typing speed, high error rate, and the resulting visual fatigue, since dwell-based key selection is used. In this research, we present a gaze-assisted, wearable-supplemented, foot-interaction framework for dwell-free gaze typing. The framework consists of a custom-built virtual keyboard, an eye tracker, and a wearable device attached to the user's foot. To enter a character, the user looks at the character and selects it by pressing the pressure pad, attached to the wearable device, with the foot. Results from a preliminary user study involving two participants with motor impairments show that the participants achieved a mean gaze typing speed of 6.23 Words Per Minute (WPM). In addition, the mean Key Strokes Per Character (KPSC) was 1.07 (ideal 1.0), and the mean Rate of Backspace Activation (RBA) was 0.07 (ideal 0.0). Furthermore, we present findings from multiple usability studies and design iterations, through which we created appropriate affordances and experience design for our gaze typing system.
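
For readers unfamiliar with these text-entry metrics, the sketch below shows how values like those reported above arise. The WPM and KPSC formulas follow common text-entry conventions; RBA is taken here as backspace presses per transcribed character, which is an assumption, and the sample numbers are invented for illustration.

# Worked example of the text-entry metrics reported above.

def wpm(transcribed_len: int, seconds: float) -> float:
    # Convention: a "word" is 5 characters; the first character is
    # discounted because timing starts when it is entered.
    return ((transcribed_len - 1) / seconds) * (60 / 5)

def kpsc(keystrokes: int, transcribed_len: int) -> float:
    return keystrokes / transcribed_len   # ideal = 1.0

def rba(backspaces: int, transcribed_len: int) -> float:
    return backspaces / transcribed_len   # ideal = 0.0

# A 25-character phrase entered in 46 s with 27 foot presses,
# 2 of them backspaces (illustrative numbers):
print(round(wpm(25, 46.0), 2), round(kpsc(27, 25), 2), round(rba(2, 25), 2))
# -> 6.26 1.08 0.08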


Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications | 2018

Gaze typing in virtual reality: impact of keyboard design, selection method, and motion

Vijay Rajanna; John Paulin Hansen

Gaze tracking in virtual reality (VR) allows for hands-free text entry, but it has not yet been explored. We investigate how keyboard design, selection method, and motion in the field of view may impact typing performance and user experience. We present two studies of people (n = 32) typing with gaze+dwell and gaze+click inputs in VR. In study 1, the keyboard was flat and within view; in study 2, it was larger than the view but curved. Both studies included a stationary condition and a dynamic motion condition in the user's field of view. Our findings suggest that 1) gaze typing in VR is viable but constrained; 2) users perform best (10.15 WPM) when the entire keyboard is within view, while the larger-than-view keyboard (9.15 WPM) induces physical strain due to increased head movements; 3) motion in the field of view impacts performance: users perform better when stationary than when in motion; and 4) gaze+click interaction is better than dwell-only interaction (fixed at 550 ms).
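
To clarify the dwell-only selection method compared here, the sketch below implements the core timing logic: a key is selected once gaze has rested on it for the dwell threshold (550 ms, as in the studies). The event format and gaze source are hypothetical; real VR eye trackers deliver gaze samples through their own SDKs.

# Minimal sketch of dwell-based key selection (550 ms threshold).

DWELL_MS = 550

def dwell_select(gaze_samples):
    """gaze_samples: iterable of (timestamp_ms, key_under_gaze)."""
    current_key, dwell_start = None, None
    for ts, key in gaze_samples:
        if key != current_key:
            current_key, dwell_start = key, ts   # gaze moved: restart timer
        elif key is not None and ts - dwell_start >= DWELL_MS:
            yield key                            # dwell threshold crossed
            current_key, dwell_start = None, None

samples = [(0, "H"), (200, "H"), (600, "H"), (700, "E"), (1300, "E")]
print(list(dwell_select(samples)))  # ['H', 'E']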


International Conference on Pervasive Computing | 2017

Did you remember to brush?: a noninvasive wearable approach to recognizing brushing teeth for elderly care

Josh Cherian; Vijay Rajanna; Daniel W. Goldberg; Tracy Hammond

Failing to brush one's teeth regularly can have surprisingly serious health consequences, from periodontal disease to coronary heart disease to pancreatic cancer. This problem is especially worrying when caring for the elderly and/or individuals with dementia, as they often forget or are unable to perform standard health activities such as brushing their teeth, washing their hands, and taking medication. To ensure that such individuals are correctly looked after, they are placed under the supervision of caretakers or family members, which simultaneously limits their independence and places an immense burden on those caretakers and family members. To address this problem, we developed a non-invasive wearable system based on a wrist-mounted accelerometer that accurately identifies when a person brushes their teeth. We tested the efficacy of our system with a month-long in-the-wild study and achieved an accuracy of 94% and an F-measure of 0.82.
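
For context, F-measure is the harmonic mean of precision and recall, which is more informative than raw accuracy when the target activity (brushing) is rare in a month of wrist data. The sketch below shows one plausible way a score like 0.82 arises; the counts are invented for illustration and are not the study's actual confusion matrix.

# Worked example of the F-measure metric reported above.

def f_measure(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 75 brushing episodes detected correctly, 14 false alarms,
# 18 missed episodes (illustrative numbers):
print(round(f_measure(75, 14, 18), 2))  # -> 0.82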


International Conference on Wireless Mobile Communication and Healthcare | 2015

Let Me Relax: Toward Automated Sedentary State Recognition and Ubiquitous Mental Wellness Solutions

Vijay Rajanna; Folami Alamudun; Daniel W. Goldberg; Tracy Hammond

Advances in ubiquitous computing technology improve workplace productivity and reduce physical exertion, but ultimately result in a sedentary work style. Sedentary behavior is associated with an increased risk of stress, obesity, and other health complications. Let Me Relax is a fully automated sedentary-state recognition framework, using a smartwatch and smartphone, that encourages mental wellness through interventions in the form of simple relaxation techniques. The system was evaluated through a comparative user study of 22 participants split into a test group and a control group. An analysis of NASA Task Load Index pre- and post-study surveys revealed that test subjects who followed the relaxation methods showed a trend of both increased activity and reduced mental stress. Reduced mental stress was found even in those test subjects whose inactivity increased. These results suggest that repeated interventions, driven by an intelligent activity recognition system, are an effective strategy for promoting healthy habits, which reduce stress, anxiety, and other health risks associated with sedentary workplaces.
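
As a rough illustration of sedentary-state recognition from a wrist-worn accelerometer, the sketch below flags a window as sedentary when the variance of acceleration magnitude stays near zero. The window length, threshold, and sample data are assumptions for illustration, not the paper's method.

# Minimal sketch of sedentary-state detection from wrist accelerometer
# magnitudes; parameters are illustrative assumptions.
import math

WINDOW = 60             # samples per window (e.g. 1 minute at 1 Hz)
STILL_THRESHOLD = 0.05  # g; below this the wrist is essentially still

def is_sedentary(window_xyz):
    """window_xyz: list of (x, y, z) accelerations in g."""
    mags = [math.sqrt(x*x + y*y + z*z) for x, y, z in window_xyz]
    mean = sum(mags) / len(mags)
    variance = sum((m - mean) ** 2 for m in mags) / len(mags)
    return variance < STILL_THRESHOLD ** 2

still = [(0.0, 0.0, 1.0)] * WINDOW   # wrist resting on a desk
print(is_sedentary(still))           # True -> schedule an intervention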


Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications | 2018

A gaze gesture-based paradigm for situational impairments, accessibility, and rich interactions

Vijay Rajanna; Tracy Hammond

Gaze gesture-based interactions on a computer are promising, but existing systems are limited by the number of supported gestures, recognition accuracy, the need to remember stroke order, lack of extensibility, and so on. We present a gaze gesture-based interaction framework in which a user can design gestures and associate them with appropriate commands such as minimize, maximize, and scroll. This allows the user to interact with a wide range of applications using a common set of gestures. Furthermore, our gesture recognition algorithm is independent of screen size and resolution, and the user can draw a gesture anywhere on the target application. Results from a user study involving seven participants showed that the system recognizes a set of nine gestures with an accuracy of 93% and an F-measure of 0.96. We envision that this framework can be leveraged in developing solutions for situational impairments and accessibility, and for implementing rich interaction paradigms.
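
To illustrate how a recognizer can be made independent of screen size, resolution, and drawing position, the sketch below applies the translation-and-scale normalization common in template-based gesture recognizers (in the style of $1-recognizer preprocessing); the paper's actual algorithm may differ.

# Minimal sketch of position- and scale-invariant gesture normalization.

def normalize(points, size=100.0):
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    w = (max(xs) - min(xs)) or 1.0
    h = (max(ys) - min(ys)) or 1.0
    scale = size / max(w, h)              # scale to a reference box
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return [((x - cx) * scale, (y - cy) * scale) for x, y in points]

# The same "L" stroke drawn small near the origin and large elsewhere
# normalizes to (numerically) identical point sequences:
small = [(0, 0), (0, 10), (10, 10)]
large = [(500, 300), (500, 400), (600, 400)]
na, nb = normalize(small), normalize(large)
print(all(abs(a - b) < 1e-9
          for pa, pb in zip(na, nb) for a, b in zip(pa, pb)))  # True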


Human Factors in Computing Systems | 2017

A Gaze Gesture-Based User Authentication System to Counter Shoulder-Surfing Attacks

Vijay Rajanna; Seth Polsley; Paul Taele; Tracy Hammond


Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications | 2016

GAWSCHI: gaze-augmented, wearable-supplemented computer-human interaction

Vijay Rajanna; Tracy Hammond


Proceedings of the Workshop on Communication by Gaze Interaction | 2018

A Fitts' law study of click and dwell interaction by gaze, head and mouse with a head-mounted display

John Paulin Hansen; Vijay Rajanna; I. Scott MacKenzie; Per Bækgaard
