Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Shrinivas Pundlik is active.

Publications


Featured research published by Shrinivas Pundlik.


International Symposium on Visual Computing | 2011

Time to collision and collision risk estimation from local scale and motion

Shrinivas Pundlik; Eli Peli; Gang Luo

Computer-vision based collision risk assessment is important in collision detection and obstacle avoidance tasks. We present an approach to determine both time to collision (TTC) and collision risk for semi-rigid obstacles from videos obtained with an uncalibrated camera. TTC for a body moving relative to the camera can be calculated using the ratio of its image size and its time derivative. In order to compute this ratio, we utilize the local scale change and motion information obtained from detection and tracking of feature points, wherein lies the chief novelty of our approach. Using the same local scale change and motion information, we also propose a measure of collision risk for obstacles moving along different trajectories relative to the camera optical axis. Using videos of pedestrians captured in a controlled experimental setup, in which ground truth can be established, we demonstrate the accuracy of our TTC and collision risk estimation approach for different walking trajectories.
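
The key relation here is that TTC equals the obstacle's image size divided by the time derivative of that size, which the paper estimates from the local scale change of tracked feature points. A minimal sketch of the discrete form of this relation (function name and the numbers in the example are illustrative, not from the paper):

```python
import numpy as np

def ttc_from_scale(scale_ratio: float, dt: float) -> float:
    """Estimate time to collision (seconds) from the frame-to-frame
    change in apparent image size of an obstacle.

    For an approaching object, TTC = w / (dw/dt) where w is image size.
    With w2 = scale_ratio * w1 measured dt seconds apart, the discrete
    estimate is dt / (scale_ratio - 1).
    """
    if scale_ratio <= 1.0:
        return np.inf  # not expanding: object is stationary or receding
    return dt / (scale_ratio - 1.0)

# Example: a tracked obstacle grows 2% between frames captured
# 1/30 s apart -> roughly 1.7 s to collision.
print(ttc_from_scale(scale_ratio=1.02, dt=1.0 / 30.0))
```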


Computer Vision and Pattern Recognition | 2013

Collision Detection for Visually Impaired from a Body-Mounted Camera

Shrinivas Pundlik; Matteo Tomasi; Gang Luo

A real-time collision detection system using a body-mounted camera is developed for visually impaired and blind people. The system computes sparse optical flow in the acquired videos, compensates for camera self-rotation using an external gyro-sensor, and estimates collision risk in local image regions based on the motion estimates. Experimental results for a variety of scenarios involving static and dynamic obstacles are shown in terms of time to collision and obstacle localization in test videos. The proposed approach successfully estimates collision risk for head-on obstacles as well as obstacles close to the user's walking path. An end-to-end collision warning system based on inputs from a video camera and a gyro-sensor has been implemented on a generic laptop and on an embedded OMAP-3 compatible platform. The proposed embedded system represents a valuable step toward a portable vision aid for visually impaired and blind patients.
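
The rotation-compensation step can be illustrated with the standard instantaneous-motion flow model: the gyro-measured angular velocity predicts the rotation-induced part of each feature's flow, which is then subtracted. This is a hedged sketch assuming aligned camera and gyro axes and a pinhole camera; the paper's actual compensation scheme may differ:

```python
import numpy as np

def derotate_sparse_flow(pts, flow, omega, f, cx, cy, dt):
    """Remove the rotation-induced component from sparse optical flow.

    pts   : (N, 2) feature positions in pixels
    flow  : (N, 2) measured flow vectors, pixels per frame
    omega : (3,) gyro angular velocity (wx, wy, wz) in rad/s,
            assumed expressed in the camera frame
    f, cx, cy : focal length and principal point in pixels
    dt    : frame interval in seconds

    Uses the standard instantaneous-motion (Longuet-Higgins) model:
    in normalized coordinates (x, y), pure rotation induces the flow
        u_rot = x*y*wx - (1 + x^2)*wy + y*wz
        v_rot = (1 + y^2)*wx - x*y*wy - x*wz
    Signs depend on the camera/gyro axis conventions.
    """
    wx, wy, wz = np.asarray(omega) * dt        # rotation over one frame
    x = (pts[:, 0] - cx) / f                   # normalized coordinates
    y = (pts[:, 1] - cy) / f
    u_rot = x * y * wx - (1.0 + x**2) * wy + y * wz
    v_rot = (1.0 + y**2) * wx - x * y * wy - x * wz
    rot_flow = np.stack([u_rot, v_rot], axis=1) * f   # back to pixels
    return flow - rot_flow                     # residual (translational) flow
```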


Journal of Vision | 2016

Mobile gaze tracking system for outdoor walking behavioral studies

Matteo Tomasi; Shrinivas Pundlik; Alex R. Bowers; Eli Peli; Gang Luo

Most gaze tracking techniques estimate gaze points on screens, on scene images, or in confined spaces. Tracking of gaze in open-world coordinates, especially in walking situations, has rarely been addressed. We use a head-mounted eye tracker combined with two inertial measurement units (IMUs) to track gaze orientation relative to the heading direction in outdoor walking. Head movements relative to the body are measured by the difference in output between the IMUs on the head and body trunk. The use of the IMU pair reduces the impact of environmental interference on each sensor. The system was tested in busy urban areas and allowed drift compensation for long (up to 18 min) gaze recordings. Comparison with ground truth revealed an average error of 3.3° while walking along straight segments. The range of gaze scanning during walking is typically about an order of magnitude larger than this estimation error. The proposed method was also tested in real cases of natural walking and found suitable for evaluating gaze behavior in outdoor environments.
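
The core idea of the IMU pair, differencing head and trunk orientation so that common-mode disturbances cancel, can be sketched as follows (function and variable names are hypothetical):

```python
def gaze_relative_to_heading(eye_in_head_yaw, head_imu_yaw, trunk_imu_yaw):
    """Combine eye-tracker and IMU-pair outputs into gaze yaw relative
    to the walking (trunk heading) direction, in degrees.

    The head and trunk IMUs share common environmental disturbances
    (e.g., magnetic interference), so their difference, head-relative-
    to-trunk yaw, is less affected than either reading alone.
    """
    head_on_trunk = head_imu_yaw - trunk_imu_yaw  # head rotation w.r.t. body
    return eye_in_head_yaw + head_on_trunk

# Example: eye rotated 10 deg right in the head, head turned 20 deg left
# of the trunk -> gaze points 10 deg left of the heading direction.
print(gaze_relative_to_heading(10.0, -25.0, -5.0))
```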


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2017

Magnifying Smartphone Screen using Google Glass for Low-Vision Users

Shrinivas Pundlik; Huaqi Yi; Rui Liu; Eli Peli; Gang Luo

Magnification is a key accessibility feature used by low-vision smartphone users. However, the small screen size can lead to loss of context and make interaction with magnified displays challenging. We hypothesize that controlling the viewport with head motion can be natural and help in gaining access to magnified displays. We implement this idea using a Google Glass that displays magnified smartphone screenshots received in real time via Bluetooth. Instead of navigating with touch gestures on the magnified smartphone display, users can view different screen locations by rotating their head while remotely interacting with the smartphone. It is equivalent to looking at a large virtual image through a head-contingent viewing port, in this case the Glass display with an approximately 15° field of view. The system can transfer seven screenshots per second at 8× magnification, sufficient for tasks where the display content does not change rapidly. A pilot evaluation of this approach was conducted with eight normally sighted and four visually impaired subjects performing assigned tasks using calculator and music player apps. Results showed that performance in the calculation task was faster with the Glass than with the phone's built-in screen zoom. We conclude that head-contingent scanning control can be beneficial in navigating magnified small smartphone displays, at least for tasks involving familiar content layout.
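
A minimal sketch of the head-contingent viewport idea: head rotation pans a fixed-size window over the magnified screenshot. The gain parameter, window size, and nearest-neighbor upscaling are illustrative assumptions, not details from the paper:

```python
import numpy as np

def head_contingent_viewport(screenshot, zoom, yaw_deg, pitch_deg,
                             out_w=640, out_h=360, gain=40.0):
    """Crop a head-contingent viewport from a magnified screenshot.

    screenshot : (H, W, 3) array, the phone screen image
    zoom       : integer magnification factor (e.g., 8)
    yaw_deg, pitch_deg : head rotation relative to a neutral pose
    gain       : pixels of pan per degree of head rotation (assumed)
    """
    # Magnify the whole screenshot (nearest-neighbor; illustrative only).
    big = np.repeat(np.repeat(screenshot, zoom, axis=0), zoom, axis=1)
    big_h, big_w = big.shape[:2]
    # Head rotation shifts the window center away from the image center.
    cx = big_w / 2 + gain * yaw_deg
    cy = big_h / 2 - gain * pitch_deg
    x0 = int(np.clip(cx - out_w / 2, 0, big_w - out_w))
    y0 = int(np.clip(cy - out_h / 2, 0, big_h - out_h))
    return big[y0:y0 + out_h, x0:x0 + out_w]
```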


Investigative Ophthalmology & Visual Science | 2015

Evaluation of a Portable Collision Warning Device for Patients With Peripheral Vision Loss in an Obstacle Course.

Shrinivas Pundlik; Matteo Tomasi; Gang Luo

PURPOSE: A pocket-sized collision warning device equipped with a video camera was developed to predict impending collisions based on time to collision rather than proximity. A study was conducted in a high-density obstacle course to evaluate the effect of the device on collision avoidance in people with peripheral field loss (PFL). METHODS: The 41-meter-long loop-shaped obstacle course consisted of 46 stationary obstacles from floor to head level, plus oncoming pedestrians. Twenty-five patients with tunnel vision (n = 13) or hemianopia (n = 12) completed four consecutive loops with and without the device, while not using any other habitual mobility aid. Walking direction and device usage order were counterbalanced. The number of collisions and the percentage of preferred walking speed (PPWS) were compared within subjects. RESULTS: Collisions were reduced significantly, by approximately 37% (P < 0.001), with the device (floor-level obstacles were excluded because the device was not designed for them). No patient had more collisions when using the device. Although the PPWS was also reduced with the device, from 52% to 49% (P = 0.053), this did not account for the lower number of collisions, as the changes in collisions and PPWS were not correlated (P = 0.516). CONCLUSIONS: The device may help patients with a wide range of PFL avoid collisions with above-floor obstacles while barely affecting their walking speed.
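
The PPWS outcome measure used above is simply in-course walking speed normalized by the subject's preferred unobstructed walking speed; the speeds in this example are made up:

```python
def ppws(course_speed_mps: float, preferred_speed_mps: float) -> float:
    """Percentage of preferred walking speed (PPWS): walking speed in the
    obstacle course as a percentage of the subject's preferred
    (unobstructed) walking speed."""
    return 100.0 * course_speed_mps / preferred_speed_mps

# Example: 0.62 m/s in the course vs. 1.2 m/s preferred -> ~52%.
print(ppws(0.62, 1.2))
```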


Fuzzy Systems and Knowledge Discovery | 2012

Collision risk estimation from an uncalibrated moving camera based on feature points tracking and clustering

Shrinivas Pundlik; Gang Luo

We present an approach to estimate collision risk using a single uncalibrated camera attached to a moving platform. The proposed approach is based on computing the local scale change from image motion information obtained by tracking feature points. A fuzzy logic based thresholding step is applied to the tracked feature points to obtain the set of feature points that most likely represent the potential obstacle. The resultant set of points is clustered, and the time-to-collision values for the corresponding clusters can be computed to determine the risk of collision. We perform collision detection experiments on three image sequences obtained from a moving car. The results indicate that the proposed approach can estimate collision risk with a single uncalibrated camera.
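
The per-cluster TTC step can be sketched as follows. Cluster labels are assumed given (the paper's fuzzy thresholding and clustering steps are omitted here), and each cluster's TTC follows the same size-ratio relation as in the 2011 paper above:

```python
import numpy as np

def per_cluster_ttc(labels, scale_ratios, dt):
    """Given cluster labels for tracked feature points and each point's
    frame-to-frame local scale ratio, return a TTC estimate per cluster.

    A cluster's scale change is taken as the median over its points
    (robust to tracking outliers); TTC = dt / (s - 1) for s > 1.
    """
    ttc = {}
    for k in np.unique(labels):
        s = np.median(scale_ratios[labels == k])
        ttc[k] = dt / (s - 1.0) if s > 1.0 else np.inf
    return ttc

# Example: two clusters, one expanding (potential obstacle), one stable.
labels = np.array([0, 0, 0, 1, 1])
scales = np.array([1.03, 1.02, 1.025, 1.0, 0.99])
print(per_cluster_ttc(labels, scales, dt=1.0 / 30.0))
```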


bioRxiv | 2018

The Roles of Different Spatial Frequency Channels in Real-World Visual Motion Perception

Cong Shi; Shrinivas Pundlik; Gang Luo



Journal of Vision | 2015

From small to large, all saccades follow the same timeline

Shrinivas Pundlik; Russell L. Woods; Gang Luo



Journal of Real-time Image Processing | 2016

FPGA-DSP co-processing for feature tracking in smart video sensors

Matteo Tomasi; Shrinivas Pundlik; Gang Luo



Computer Vision and Pattern Recognition | 2013

Stabilization of Magnified Videos on a Mobile Device for Visually Impaired

Zewen Li; Shrinivas Pundlik; Gang Luo


Collaboration


Dive into Shrinivas Pundlik's collaborations.

Top Co-Authors

Eli Peli
Massachusetts Eye and Ear Infirmary

Kevin E. Houston
Massachusetts Eye and Ear Infirmary