Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Harish S. Kulkarni is active.

Publication


Featured research published by Harish S. Kulkarni.


Human Factors in Computing Systems | 2017

Toward Everyday Gaze Input: Accuracy and Precision of Eye Tracking and Implications for Design

Anna Maria Feit; Shane F. Williams; Arturo Toledo; Ann Paradiso; Harish S. Kulkarni; Shaun K. Kane; Meredith Ringel Morris

For eye tracking to become a ubiquitous part of our everyday interaction with computers, we first need to understand its limitations outside rigorously controlled labs, and develop robust applications that can be used by a broad range of users and in various environments. Toward this end, we collected eye tracking data from 80 people in a calibration-style task, using two different trackers in two lighting conditions. We found that accuracy and precision can vary between users and targets more than six-fold, and report on differences between lighting, trackers, and screen regions. We show how such data can be used to determine appropriate target sizes and to optimize the parameters of commonly used filters. We conclude with design recommendations and examples of how our findings and methodology can inform the design of error-aware adaptive applications.
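
The accuracy and precision measures discussed in this abstract can be made concrete with a short sketch. The snippet below is a hypothetical illustration, not the authors' code: it computes per-target accuracy as the mean offset of gaze samples from a known target and precision as the RMS of sample-to-sample dispersion, then derives a conservative target radius from worst-case values. The function names, the 95th-percentile rule, and the assumption that coordinates are in degrees of visual angle are all choices made for this example.

import numpy as np

def accuracy_and_precision(gaze_xy, target_xy):
    """Per-target accuracy and precision from calibration-style gaze data.

    gaze_xy   : (n, 2) array of gaze samples (degrees of visual angle)
    target_xy : (2,)   true target position in the same units

    Accuracy  = mean offset of the samples from the target.
    Precision = RMS of the sample-to-sample displacement (dispersion).
    """
    gaze_xy = np.asarray(gaze_xy, dtype=float)
    offsets = np.linalg.norm(gaze_xy - np.asarray(target_xy, dtype=float), axis=1)
    accuracy = offsets.mean()

    deltas = np.diff(gaze_xy, axis=0)
    precision = np.sqrt((np.linalg.norm(deltas, axis=1) ** 2).mean())
    return accuracy, precision

def minimum_target_radius(accuracies, precisions, margin=1.0):
    """A conservative target radius: near-worst-case accuracy plus precision,
    scaled by a safety margin (all in degrees of visual angle)."""
    return margin * (np.percentile(accuracies, 95) + np.percentile(precisions, 95))

# Example: noisy samples recorded around a target at (10, 5) degrees.
rng = np.random.default_rng(0)
samples = rng.normal(loc=[10.2, 5.1], scale=0.3, size=(120, 2))
acc, prec = accuracy_and_precision(samples, (10.0, 5.0))
print(f"accuracy={acc:.2f} deg, precision={prec:.2f} deg")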


Human Factors in Computing Systems | 2017

Smartphone-Based Gaze Gesture Communication for People with Motor Disabilities

Xiaoyi Zhang; Harish S. Kulkarni; Meredith Ringel Morris

Current eye-tracking input systems for people with ALS or other motor impairments are expensive, not robust under sunlight, and require frequent re-calibration and substantial, relatively immobile setups. Eye-gaze transfer (e-tran) boards, a low-tech alternative, are challenging to master and offer slow communication rates. To mitigate the drawbacks of these two status quo approaches, we created GazeSpeak, an eye gesture communication system that runs on a smartphone, and is designed to be low-cost, robust, portable, and easy-to-learn, with a higher communication bandwidth than an e-tran board. GazeSpeak can interpret eye gestures in real time, decode these gestures into predicted utterances, and facilitate communication, with different user interfaces for speakers and interpreters. Our evaluations demonstrate that GazeSpeak is robust, has good user satisfaction, and provides a speed improvement with respect to an e-tran board; we also identify avenues for further improvement to low-cost, low-effort gaze-based communication technologies.
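
As a rough illustration of the gesture-to-utterance decoding idea described above (the direction-to-letter groupings, vocabulary, and frequency scoring here are invented for the example and are not GazeSpeak's actual design), the sketch below maps four eye-gesture directions to letter groups, in the spirit of an e-tran board, and ranks the words in a toy frequency list that are consistent with a gesture sequence.

# Hypothetical mapping of four eye-gesture directions to letter groups,
# loosely in the spirit of e-tran boards.
GROUPS = {
    "up":    "abcdef",
    "right": "ghijkl",
    "down":  "mnopqr",
    "left":  "stuvwxyz",
}

# Toy word-frequency list standing in for a real language model.
WORD_FREQ = {"hi": 50, "hello": 80, "help": 60, "water": 40, "yes": 90, "no": 85}

def decode(gestures):
    """Expand a gesture sequence into candidate words and rank them.

    Each gesture selects a letter group; a word is a candidate if its
    letters fall in the chosen groups position by position.
    """
    candidates = []
    for word, freq in WORD_FREQ.items():
        if len(word) != len(gestures):
            continue
        if all(ch in GROUPS[g] for ch, g in zip(word, gestures)):
            candidates.append((word, freq))
    return [w for w, _ in sorted(candidates, key=lambda wf: -wf[1])]

print(decode(["right", "right"]))                 # -> ['hi']
print(decode(["right", "up", "right", "down"]))   # -> ['help']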


Archive | 2012

Staged access points

Jonathan Garn; Yee-Shian Lee; April A. Reagan; Harish S. Kulkarni


Archive | 2008

Dynamic logical unit number creation and protection for a transient storage device

David Abzarian; Harish S. Kulkarni; Todd L. Carpenter


Archive | 2010

Capturing and loading operating system states

David Abzarian; Todd L. Carpenter; Harish S. Kulkarni


Archive | 2009

Capturing a computing experience

Todd L. Carpenter; David Abzarian; Seshagiri Panchapagesan; Harish S. Kulkarni


Archive | 2008

Providing a single drive letter user experience and regional based access control with respect to a storage device

David Abzarian; Todd L. Carpenter; Harish S. Kulkarni


Archive | 2008

Device-side inline pattern matching and policy enforcement

David Abzarian; Todd L. Carpenter; Harish S. Kulkarni; Mark Myers; David J. Steeves


Human Factors in Computing Systems | 2017

Exploring the Design Space of AAC Awareness Displays

Kiley Sobel; Alexander Fiannaca; Jon Campbell; Harish S. Kulkarni; Ann Paradiso; Edward Cutrell; Meredith Ringel Morris


Archive | 2009

Device Enforced File Level Protection

David Abzarian; Harish S. Kulkarni; Todd L. Carpenter; Cinthya R. Urasaki

Collaboration


Dive into Harish S. Kulkarni's collaborations.
