Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Jayan Eledath is active.

Publication


Featured research published by Jayan Eledath.


IEEE Transactions on Intelligent Transportation Systems | 2009

Collision Sensing by Stereo Vision and Radar Sensor Fusion

Shunguang Wu; Stephen Decker; Peng Chang; Theodore Camus; Jayan Eledath

To take advantage of both stereo cameras and radar, this paper proposes a fusion approach that accurately estimates the location, size, pose, and motion of a threat vehicle with respect to the host from observations obtained by both sensors. We first fit the contour of the threat vehicle from stereo depth information and find the closest point on the contour from the vision sensor. The fused closest point is then obtained by fusing radar observations with the vision closest point. Translating the fitted contour to the fused closest point yields the fused contour. Finally, the fused contour is tracked using rigid-body constraints to estimate the location, size, pose, and motion of the threat vehicle. Experimental results on both synthetic data and real-world road-test data demonstrate the success of the proposed algorithm.
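The paper's full fusion pipeline is not reproduced here, but the core step of combining a radar observation with a vision-derived closest point can be sketched as a standard inverse-covariance (minimum-variance) fusion of two noisy 2D measurements. The function name and the covariance values below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fuse_points(z_vision, R_vision, z_radar, R_radar):
    """Fuse two noisy 2D observations of the same point by
    inverse-covariance weighting (minimum-variance linear fusion)."""
    W_v = np.linalg.inv(R_vision)   # information matrix of the vision measurement
    W_r = np.linalg.inv(R_radar)    # information matrix of the radar measurement
    P = np.linalg.inv(W_v + W_r)    # fused covariance (never larger than either input)
    z = P @ (W_v @ z_vision + W_r @ z_radar)  # fused closest point
    return z, P

# Vision: good lateral (x) accuracy, poor range (y); radar: the opposite.
z_v = np.array([1.0, 20.0])
R_v = np.diag([0.05, 4.0])
z_r = np.array([1.3, 19.2])
R_r = np.diag([1.0, 0.1])
z_fused, P_fused = fuse_points(z_v, R_v, z_r, R_r)
```

Each axis of the fused point is pulled toward the sensor that measures that axis more precisely, which mirrors the complementary strengths of the two sensors described in the abstract.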


Workshop on Applications of Computer Vision | 2007

Egomotion Estimation in Monocular Infra-red Image Sequence for Night Vision Applications

Sang-Hack Jung; Jayan Eledath; Stefan Johansson; Vincent Mathevon

This paper presents a real-time egomotion estimation scheme specifically designed to measure vehicle motion from a monocular infrared image sequence at night. Conventional camera-motion estimation methods, such as optical-flow-based or direct methods that depend on brightness constancy, do not work well in relatively low-resolution infrared imagery due to a lack of texture. We propose a method based on aggregate feature tracking constrained by the focus of expansion induced by the instantaneous vehicle motion. A qualitative error analysis is presented by comparison with ground-truth accelerometer data. An application combining ground-plane estimation with vanishing-point estimation is also presented for robust pose estimation.
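The paper's aggregate feature tracking is not available here, but the geometric constraint it exploits can be sketched: under pure forward translation, image flow vectors radiate from the focus of expansion (FOE), so the FOE can be recovered as the least-squares intersection of the flow lines. The function name and the synthetic data are illustrative assumptions.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares focus of expansion: the point minimizing the sum of
    squared perpendicular distances to the lines carrying each flow vector."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, v in zip(points, flows):
        d = v / np.linalg.norm(v)          # unit flow direction
        N = np.eye(2) - np.outer(d, d)     # projector onto the line's normal space
        A += N
        b += N @ p
    return np.linalg.solve(A, b)

# Synthetic forward motion: flows radiate exactly from a true FOE at (160, 120).
foe_true = np.array([160.0, 120.0])
pts = np.array([[100.0, 60.0], [220.0, 200.0], [60.0, 180.0], [250.0, 40.0]])
flows = pts - foe_true                      # purely radial flow field
foe = estimate_foe(pts, flows)
```

With noisy tracks the same normal equations give the best-fit FOE, which can then serve as the constraint on feature tracking that the abstract describes.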


International Conference on Robotics and Automation | 2010

A real-time pedestrian detection system based on structure and appearance classification

Mayank Bansal; Sang-Hack Jung; Bogdan Matei; Jayan Eledath; Harpreet S. Sawhney

We present a real-time pedestrian detection system based on structure and appearance classification. We discuss several novel ideas that contribute to low false-alarm and high detection rates while achieving computational efficiency: (i) at the front end of our system, we employ stereo to detect pedestrians in 3D range maps using template matching with a representative 3D shape model, and to classify other background objects in the scene such as buildings, trees, and poles; the structure classification efficiently labels a substantial amount of non-relevant image area and focuses the subsequent, computationally expensive processing on relatively small image parts; (ii) we improve appearance-based classifiers built on HoG descriptors by performing template matching with 2D human shape contour fragments, which improves localization and accuracy; (iii) we build a suite of classifiers tuned to specific distance ranges for optimized system performance. Our method is evaluated on publicly available datasets and is shown to match or exceed the performance of leading pedestrian detectors in accuracy while achieving real-time computation (10 Hz), making it adequate for an in-vehicle navigation platform.
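Item (iii) above, a suite of classifiers tuned to distance ranges, can be sketched as a simple range-bucketed dispatcher. The class name, the range boundaries, and the toy classifiers are all assumptions for illustration; the paper's real classifiers would score HoG windows.

```python
from bisect import bisect_right

class RangeTunedSuite:
    """Dispatch each candidate window to a classifier tuned for the
    target's distance band (boundaries here are illustrative)."""
    def __init__(self, boundaries_m, classifiers):
        # n boundaries partition the range axis into n + 1 bands.
        assert len(classifiers) == len(boundaries_m) + 1
        self.boundaries = boundaries_m
        self.classifiers = classifiers

    def classify(self, window, range_m):
        idx = bisect_right(self.boundaries, range_m)  # find the band
        return self.classifiers[idx](window)

# Toy stand-ins for near/mid/far classifiers.
suite = RangeTunedSuite(
    boundaries_m=[10.0, 25.0],
    classifiers=[lambda w: "near", lambda w: "mid", lambda w: "far"],
)
band = suite.classify("window", 18.0)   # 18 m falls in the 10-25 m band
```

The design choice is that pedestrian appearance changes sharply with distance (resolution, visible detail), so each band's classifier can be trained on scale-appropriate examples.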


Computer Vision and Pattern Recognition | 2009

Real-time vehicle detection for highway driving

Ben Southall; Mayank Bansal; Jayan Eledath

We present a new multi-stage algorithm for car and truck detection from a moving vehicle. The algorithm searches for pertinent features in three dimensions, guided by a ground-plane and lane-boundary estimation sub-system, and assembles these features into vehicle hypotheses. A number of classifiers are applied to the hypotheses to remove false detections. Quantitative analysis on real-world test data shows a detection rate of 99.4% and a false positive rate of 1.77%, a result that compares favourably with other systems in the literature.


IEEE Intelligent Vehicles Symposium | 2008

Collision sensing by stereo vision and radar sensor fusion

Shunguang Wu; Stephen Decker; Peng Chang; Theodore Camus; Jayan Eledath

To take advantage of both stereo cameras and radar, this paper proposes a fusion approach that accurately estimates the location, size, pose, and motion of a threat vehicle with respect to the host from observations obtained by both sensors. We first fit the contour of the threat vehicle from stereo depth information and find the closest point on the contour from the vision sensor. The fused closest point is then obtained by fusing radar observations with the vision closest point. Translating the fitted contour to the fused closest point yields the fused contour. Finally, the fused contour is tracked using rigid-body constraints to estimate the location, size, pose, and motion of the threat vehicle. Experimental results on both synthetic data and real-world road-test data demonstrate the success of the proposed algorithm.


Medical Image Computing and Computer-Assisted Intervention | 2013

Interactive Retinal Vessel Extraction by Integrating Vessel Tracing and Graph Search

Lu Wang; Vinutha Kallem; Mayank Bansal; Jayan Eledath; Harpreet S. Sawhney; Karen A. Karp; Denise J. Pearson; Monte D. Mills; Graham E. Quinn; Richard A. Stone

Despite recent advances, automatic blood vessel extraction from low-quality retina images remains difficult. We propose an interactive approach that enables a user to efficiently obtain near-perfect vessel segmentation with a few mouse clicks. Given two seed points, the approach seeks an optimal path between them by minimizing a cost function. In contrast to the Live-Vessel approach, the graph in our approach is built from curve fragments generated by vessel tracing rather than from individual pixels. This lets our approach overcome the shortcut problem in extracting tortuous vessels and the vessel-interference problem in extracting neighboring vessels that affect minimal-cost path techniques, requiring less user interaction to extract thin and tortuous vessels from low-contrast images. It also makes the approach much faster.
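The paper's contribution is building the graph from traced curve fragments rather than pixels; the minimal-cost-path search itself is the classical part and can be sketched with Dijkstra's algorithm. The graph below is a tiny illustrative stand-in, not data from the paper.

```python
import heapq

def min_cost_path(graph, start, goal):
    """Dijkstra over a weighted graph whose nodes stand in for traced
    vessel fragments and whose edge weights stand in for tracing cost."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == goal:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Walk back from the goal to recover the optimal path.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]

# Two routes between seed points A and D: A-B-D costs 3.0, A-C-D costs 4.5.
g = {"A": [("B", 1.0), ("C", 4.0)], "B": [("D", 2.0)], "C": [("D", 0.5)]}
path, cost = min_cost_path(g, "A", "D")
```

Operating on fragments shrinks the graph by orders of magnitude compared with a per-pixel graph, which is why the abstract can claim both fewer shortcut errors and much faster search.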


International Conference on Computer Vision | 2009

Pedestrian detection with depth-guided structure labeling

Mayank Bansal; Bogdan Matei; Harpreet S. Sawhney; Sang-Hack Jung; Jayan Eledath

We propose a principled statistical approach for using 3D information and scene context to reduce the number of false positives in stereo based pedestrian detection. Current pedestrian detection algorithms have focused on improving the discriminability of 2D features that capture the pedestrian appearance, and on using various classifier architectures. However, there has been less focus on exploiting the geometry and spatial context in the scene to improve pedestrian detection performance. We make several contributions: (i) we define a new 3D feature, called a Vertical Support Histogram, from dense stereo range maps to locally characterize 3D structure; (ii) we estimate the likelihoods of these 3D features using kernel density estimation, and use them within a Markov Random Field (MRF) to enforce spatial constraints between the features, and to obtain the Maximum A-Posteriori (MAP) scene labeling; (iii) we employ the MAP scene labelings to reduce the number of candidate windows that are tested by a standard, state-of-the-art pedestrian appearance classifier. We evaluate our algorithm on a very challenging, publicly available stereo dataset and compare the performance with state-of-the-art methods.
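Contribution (ii) above, estimating feature likelihoods nonparametrically with kernel density estimation, can be sketched for a scalar feature. The feature values, class names, and bandwidths below are hypothetical; the paper's Vertical Support Histograms are multi-dimensional.

```python
import math

def kde_likelihood(x, samples, bandwidth):
    """Gaussian kernel density estimate of a scalar feature's likelihood,
    learned from training samples of one structure class."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2.0 * math.pi))
    return norm * sum(
        math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples
    )

# Hypothetical training samples of a vertical-extent feature (metres)
# for two structure classes seen in street scenes.
ped_samples = [1.6, 1.7, 1.75, 1.8]     # pedestrian-like structures
bld_samples = [4.0, 5.5, 6.0, 8.0]      # building-like structures
x = 1.72                                 # feature value at a test location
ped_like = kde_likelihood(x, ped_samples, bandwidth=0.1)
bld_like = kde_likelihood(x, bld_samples, bandwidth=0.5)
```

Per-class likelihoods like these supply the unary terms of the MRF; the pairwise terms then enforce the spatial smoothness between neighbouring labels that the abstract describes.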


International Conference on Intelligent Transportation Systems | 2008

Vision-based Perception for Autonomous Urban Navigation

Mayank Bansal; Aveek Das; Greg Kreutzer; Jayan Eledath; Rakesh Kumar; Harpreet S. Sawhney

We describe a low-cost vision-based sensing and positioning system that enables intelligent vehicles of the future to autonomously drive in an urban environment with traffic. The system was built by integrating Sarnoff's algorithms for driver awareness and vehicle safety with commercial off-the-shelf hardware on a robot vehicle. We implemented a modular and parallelized software architecture that allowed us to achieve an overall sensor update rate of 12 Hz with multiple high-resolution cameras without sacrificing robustness or in-field performance. The system was field tested on the Team Autonomous Solutions vehicle, one of the top twenty teams in the 2007 DARPA Urban Challenge competition. In addition to enabling autonomy, our low-cost perception system has the intermediate advantage of providing driver awareness for convenience functions such as adaptive cruise control, lane-departure sensing, and forward- and side-collision warning.


Investigative Ophthalmology & Visual Science | 2010

Utility of Digital Stereo Images for Optic Disc Evaluation

Richard A. Stone; Gui-shuang Ying; Denise J. Pearson; Mayank Bansal; Manika Puri; E. Miller; Judith Alexander; Jody R. Piltz-Seymour; William Nyberg; Maureen G. Maguire; Jayan Eledath; Harpreet S. Sawhney

PURPOSE To assess the suitability of digital stereo images for optic disc evaluations in glaucoma. METHODS Stereo color optic disc images in both digital and 35-mm slide film formats were acquired contemporaneously from 29 subjects with various cup-to-disc ratios (range, 0.26-0.76; median, 0.475). Using a grading scale designed to assess image quality, the ease of visualizing optic disc features important for glaucoma diagnosis, and the comparative diameters of the optic disc cup, experienced observers separately compared the primary digital stereo images to each subject's 35-mm slides, to scanned images of the same 35-mm slides, and to grayscale conversions of the digital images. Statistical analysis accounted for multiple gradings and comparisons and also assessed image formats under monoscopic viewing. RESULTS Overall, the quality of primary digital color images was judged superior to that of 35-mm slides (P < 0.001), including improved stereo (P < 0.001), but the primary digital color images were mostly equivalent to the scanned digitized images of the same slides. Color seemingly added little to grayscale optic disc images, except that peripapillary atrophy was best seen in color (P < 0.0001); both the nerve fiber layer (P < 0.0001) and the paths of blood vessels on the optic disc (P < 0.0001) were best seen in grayscale. The preference for digital over film images was maintained under monoscopic viewing conditions. CONCLUSIONS Digital stereo optic disc images are useful for evaluating the optic disc in glaucoma and allow the application of advanced image processing applications. Grayscale images, by providing luminance distinct from color, may be informative for assessing certain features.


International Conference on Robotics and Automation | 2011

A LIDAR streaming architecture for mobile robotics with application to 3D structure characterization

Mayank Bansal; Bogdan Calin Mihai Matei; Ben Southall; Jayan Eledath; Harpreet S. Sawhney

We present a novel LIDAR streaming architecture for real-time, on-board processing using unmanned robots. We propose a two-level 3D data structure that allows pipelined and streaming processing of the 3D data as it arrives from a moving robot: (i) at the coarse level, the incoming 3D scans are stored in memory in a dense 3D voxel grid with a relatively large voxel size - this ensures buffering of the most recent data and the availability of sufficient 3D measurements within a specific processing volume at the next level; (ii) at the fine level, we employ a data chunking mechanism guided by the movement of the robot and a rolling dense 3D voxel grid for processing the data in the immediate vicinity of the robot, which enables reuse of previously computed features. The architecture proposed requires a very small memory footprint, minimal data copying, and allows a fast spatial access for 3D data, even at the finest resolutions. We illustrate the proposed streaming architecture on a real-time 3D structure characterization task for detecting doors and stairs in indoor environments and show qualitative results demonstrating the effectiveness of our approach.
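The two-level layout described above can be sketched as a pair of hashed voxel grids: a coarse grid that buffers all recent scans, and a fine grid populated only inside the robot's current processing volume. The class name, voxel sizes, and region-of-interest radius below are assumptions for illustration, not values from the paper.

```python
from collections import defaultdict

class TwoLevelVoxelGrid:
    """Sketch of a coarse/fine voxel layout for streaming LIDAR points."""
    def __init__(self, coarse=1.0, fine=0.1, roi_radius=5.0):
        self.coarse_size = coarse        # large voxels buffer recent scans
        self.fine_size = fine            # small voxels near the robot
        self.roi = roi_radius            # half-width of the processing volume
        self.coarse_grid = defaultdict(list)
        self.fine_grid = defaultdict(list)

    @staticmethod
    def _key(point, size):
        # Integer voxel index along each axis (floor division).
        return tuple(int(c // size) for c in point)

    def insert(self, point, robot_pos):
        self.coarse_grid[self._key(point, self.coarse_size)].append(point)
        # Fine-level binning only inside the robot's processing volume.
        if all(abs(p - r) <= self.roi for p, r in zip(point, robot_pos)):
            self.fine_grid[self._key(point, self.fine_size)].append(point)

grid = TwoLevelVoxelGrid()
grid.insert((2.05, 0.4, 0.9), robot_pos=(0.0, 0.0, 0.0))   # near: both levels
grid.insert((40.0, 3.0, 1.2), robot_pos=(0.0, 0.0, 0.0))   # far: coarse only
```

Hashing only occupied voxels keeps the memory footprint small, and restricting the fine grid to the robot's vicinity is what allows fast spatial access at the finest resolution as the robot moves.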

Collaboration


Dive into Jayan Eledath's collaboration.

Top Co-Authors

Richard A. Stone
University of Pennsylvania

Denise J. Pearson
University of Pennsylvania