
Publication


Featured research published by Vivek Pradeep.


Computer Vision and Pattern Recognition | 2010

Robot vision for the visually impaired

Vivek Pradeep; Gérard G. Medioni; James D. Weiland

We present a head-mounted, stereo-vision-based navigational assistance device for the visually impaired. The head-mounted design lets subjects stand and scan the scene to integrate wide-field information, unlike the shoulder- or waist-mounted designs in the literature, which require body rotations. To extract and maintain orientation information for creating a sense of egocentricity in blind users, we incorporate visual odometry and feature-based metric-topological SLAM into our system. Using camera pose estimates with dense 3D data obtained from stereo triangulation, we build a vicinity map of the user's environment. On this map, we perform 3D traversability analysis to steer subjects away from obstacles in the path. A tactile interface consisting of microvibration motors provides cues for taking evasive action, as determined by our vision processing algorithms. We report experimental results of our system (running at 10 Hz) and conduct mobility tests with blindfolded subjects to demonstrate the usefulness of our approach over conventional navigational aids such as the white cane.
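The traversability analysis described in the abstract can be illustrated with a toy sketch (not the paper's actual algorithm): assume the vicinity map is a 2D height grid, mark cells rising above a clearance threshold as blocked, and cue the user toward the sector with the fewest obstacles. The names (`traversability`, `steering_cue`) and thresholds here are hypothetical.

```python
import numpy as np

def traversability(height_map, ground_z=0.0, clearance=0.3):
    """Mark grid cells as blocked when the local height rises more
    than `clearance` above the assumed ground plane."""
    return (height_map - ground_z) > clearance

def steering_cue(blocked, sectors=3):
    """Split the map into left/center/right sectors and suggest the
    sector with the fewest blocked cells as the heading."""
    parts = np.array_split(blocked, sectors, axis=1)
    counts = [p.sum() for p in parts]
    return ["left", "center", "right"][int(np.argmin(counts))]

# Toy vicinity map: an obstacle occupies the right third of the grid.
grid = np.zeros((4, 6))
grid[:, 4:] = 1.0
print(steering_cue(traversability(grid)))  # prints "left"
```

A real system would build the height grid from stereo triangulation and map the chosen sector to a vibration motor rather than a string.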


International Symposium on Mixed and Augmented Reality | 2013

MonoFusion: Real-time 3D reconstruction of small scenes with a single web camera

Vivek Pradeep; Christoph Rhemann; Shahram Izadi; Christopher Zach; Michael Bleyer; Steven Bathiche

MonoFusion allows a user to build dense 3D reconstructions of their environment in real time, using only a single, off-the-shelf web camera as the input sensor. The camera can be one already built into a tablet, phone, or standalone device; no additional input hardware is required. This removes the need for power-intensive active sensors that do not work robustly in natural outdoor lighting. From the camera's input stream, we first estimate the 6DoF camera pose using a sparse tracking method. These poses are then used for efficient dense stereo matching between the input frame and a previously extracted key frame. The resulting dense depth maps are fused directly into a voxel-based implicit model (using a computationally inexpensive method), and surfaces are extracted per frame. The system can recover from tracking failures and filter geometrically inconsistent noise out of the 3D reconstruction. Our method is both simple to implement and efficient, making such systems even more accessible. This paper details the algorithmic components that make up our system and a GPU implementation of our approach. Qualitative results demonstrate high-quality reconstructions, visually comparable even to active depth-sensor-based systems such as KinectFusion.
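The per-frame depth fusion described above can be sketched as a weighted running average over a voxel grid, a minimal stand-in (not MonoFusion's implementation) showing how repeated observations average out geometrically inconsistent noise; the `VoxelFusion` class and its parameters are illustrative only.

```python
import numpy as np

class VoxelFusion:
    """Minimal depth-map fusion: each voxel keeps a running weighted
    average of signed-distance observations, so transient noise in
    any single depth map is averaged away over time."""
    def __init__(self, shape):
        self.dist = np.zeros(shape)    # accumulated signed distance
        self.weight = np.zeros(shape)  # per-voxel observation weight

    def integrate(self, sdf, w=1.0):
        # Weighted running average: blend the new observation into
        # the stored value in proportion to its weight.
        self.dist = (self.dist * self.weight + sdf * w) / (self.weight + w)
        self.weight += w

fusion = VoxelFusion((2, 2, 2))
fusion.integrate(np.full((2, 2, 2), 1.0))  # first observation
fusion.integrate(np.full((2, 2, 2), 0.0))  # conflicting observation
print(fusion.dist[0, 0, 0])  # prints 0.5, the average of the two
```

Surface extraction (e.g. marching cubes over the zero crossing of `dist`) would run per frame on top of such a grid.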


User Interface Software and Technology | 2016

Holoportation: Virtual 3D Teleportation in Real-time

Sergio Orts-Escolano; Christoph Rhemann; Sean Ryan Fanello; Wayne Chang; Adarsh Prakash Murthy Kowdle; Yury Degtyarev; David Kim; Philip Lindsley Davidson; Sameh Khamis; Mingsong Dou; Vladimir Tankovich; Charles T. Loop; Qin Cai; Philip A. Chou; Sarah Mennicken; Julien P. C. Valentin; Vivek Pradeep; Shenlong Wang; Sing Bing Kang; Pushmeet Kohli; Yuliya Lutchyn; Cem Keskin; Shahram Izadi

We present an end-to-end system for augmented and virtual reality telepresence, called Holoportation. Our system demonstrates high-quality, real-time 3D reconstructions of an entire space, including people, furniture and objects, using a set of new depth cameras. These 3D models can also be transmitted in real-time to remote users. This allows users wearing virtual or augmented reality displays to see, hear and interact with remote participants in 3D, almost as if they were present in the same physical space. From an audio-visual perspective, communicating and interacting with remote users edges closer to face-to-face communication. This paper describes the Holoportation technical system in full, its key interactive capabilities, the application scenarios it enables, and an initial qualitative study of using this new communication medium.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2010

A wearable system for the visually impaired

Vivek Pradeep; Gérard G. Medioni; James D. Weiland

We present a lightweight, low-cost, low-power wearable system for assisting the visually impaired in routine mobility tasks. Our system extends the range of the white cane by providing the user with vibro-tactile cues corresponding to the locations of obstacles and a safe path for traversal through a cluttered environment. The presented approach keeps cognitive load to a minimum and, while autonomous, adapts to the changing mobility requirements of a navigating user. In this paper, we provide an overview of the hardware and algorithmic components of our system and show results of pilot studies with human test subjects. Our system operates at 20 Hz and significantly improves mobility performance compared to using the white cane alone.


Computer Vision and Pattern Recognition | 2010

Egomotion using assorted features

Vivek Pradeep; Jongwoo Lim

We describe a novel and robust minimal solver for performing online visual odometry with a stereo rig. The proposed method can compute the underlying camera motion given any arbitrary, mixed combination of point and line correspondences across two stereo views. This facilitates a hybrid visual odometry pipeline that is enhanced by well-localized and reliably tracked line features while retaining the well-known advantages of point features. Using trifocal tensor geometry and a quaternion representation of rotation matrices, we develop a polynomial system from which camera motion parameters can be robustly extracted in the presence of noise. We show how the more popular direct linear/subspace techniques fail in this regard and demonstrate improved performance with our formulation through extensive experiments and comparisons against the 3-point and line-sfm algorithms.
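The quaternion parametrization mentioned above is what makes a polynomial formulation possible: every entry of the corresponding rotation matrix is a degree-2 polynomial in the quaternion components. A minimal sketch of that mapping (not the paper's solver):

```python
import numpy as np

def quat_to_R(q):
    """Rotation matrix from a unit quaternion (w, x, y, z).
    Each entry is a quadratic polynomial in the components,
    which is the property polynomial solvers exploit."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# 90-degree rotation about z: q = (cos 45deg, 0, 0, sin 45deg)
R = quat_to_R(np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)]))
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))  # x-axis maps to y-axis
```

Substituting this parametrization into geometric constraints (here, those from the trifocal tensor) yields polynomial equations in (w, x, y, z) rather than a linear system in the nine matrix entries.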


International Journal of Computer Vision | 2012

Egomotion Estimation Using Assorted Features

Vivek Pradeep; Jongwoo Lim

We propose a novel minimal solver for recovering camera motion across two views of a calibrated stereo rig. The algorithm can handle any assorted combination of point and line features across the four images and facilitates a visual odometry pipeline that is enhanced by well-localized and reliably-tracked line features while retaining the well-known advantages of point features. The mathematical framework of our method is based on trifocal tensor geometry and a quaternion representation of rotation matrices. A simple polynomial system is developed from which camera motion parameters may be extracted more robustly in the presence of severe noise, as compared to the conventionally employed direct linear/subspace solutions. This is demonstrated with extensive experiments and comparisons against the 3-point and line-sfm algorithms.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2012

Smart image processing system for retinal prosthesis

James D. Weiland; Neha Jagdish Parikh; Vivek Pradeep; Gérard G. Medioni

Retinal prostheses for the blind have demonstrated the ability to provide the sensation of light in otherwise blind individuals. However, visual task performance in these patients remains poor relative to someone with normal vision. Computer vision algorithms for navigation and object detection were evaluated for their ability to improve task performance. Blind subjects navigating a mobility course had fewer collisions when using a wearable camera system that guided them on a safe path. Subjects using a retinal prosthesis simulator could locate objects more quickly when an object detection algorithm assisted them. Computer vision algorithms can assist retinal prosthesis patients and low-vision patients in general.


Archive | 2012

Wearable electronic image acquisition and enhancement system and method for image acquisition and visual enhancement

James D. Weiland; Mark S. Humayun; Gérard G. Medioni; Armand R. Tanguay; Vivek Pradeep; Laurent Itti


Workshop on Computer Vision Applications for the Visually Impaired | 2008

Piecewise Planar Modeling for Step Detection using Stereo Vision

Vivek Pradeep; Gérard G. Medioni; James D. Weiland


Archive | 2012

Proximity and connection based photo sharing

Stephen Latta; Ken Hinckley; Kevin Geisner; Steven Bathiche; Hrvoje Benko; Vivek Pradeep

Collaboration


Top co-authors of Vivek Pradeep:

Gérard G. Medioni, University of Southern California

James D. Weiland, University of Southern California