Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where K. S. Venkatesh is active.

Publication


Featured research published by K. S. Venkatesh.


Robotics and Autonomous Systems | 2015

New potential field method for rough terrain path planning using genetic algorithm for a 6-wheel rover

Rekha Raja; Ashish Dutta; K. S. Venkatesh

Motion planning of rovers in rough terrain involves two parts: finding a safe path from an initial point to a goal point, and satisfying the path constraints (velocity, wheel torques, etc.) of the rover while traversing that path. In this paper, we propose a new motion planning algorithm on rough terrain for a 6-wheel rover with 10 DOF (degrees of freedom), by introducing a gradient function into the conventional potential field method. The proposed potential field function consists of an attractive force, a repulsive force, a tangential force and a gradient force. The gradient force is a function of the roll, pitch and yaw angles of the rover at a particular location on the terrain; these angles are derived from the kinematic model of the rover. This additional force component ensures that the rover does not traverse very high gradients, resulting in a safe path. Weights are assigned to the various components of the potential field function and are optimized using genetic algorithms, via a cost function, to obtain an optimal path that satisfies the path constraints. The kinematic model of the rover is also derived, giving the wheel velocity ratios as the rover traverses different gradients. A quasi-static force analysis ensures stability of the rover and prevents wheel slip. In order to compare different paths, four objective functions are evaluated, considering energy, wheel slip, traction and path length, respectively. A comparison is also made between the conventional 2D potential field method and the newly proposed 3D potential field method. Simulation and experimental results show the usefulness of the new method for generating paths in rough terrain.
Highlights:
A new potential field method is proposed for rough terrain path planning for a rover.
A gradient function is introduced into the conventional potential field method.
The gradient function depends on the roll, pitch and yaw angles of the rover.
The weights of the potential field function are optimized using a GA.
Results show that the new method is superior to the conventional potential field method.
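As a sketch of the weighted force combination the abstract describes, the resultant force and an assumed attitude-dependent gradient term might look like the following; the functional forms and weights here are illustrative placeholders, not the authors' exact formulation (the paper tunes the weights with a genetic algorithm against a path-cost function):

```python
import math

def total_force(f_att, f_rep, f_tan, f_grad, weights):
    """Weighted resultant of the four 2-D force components named in the
    abstract (attractive, repulsive, tangential, gradient). The weights
    are the quantities the paper optimizes with a GA; the values used
    here are illustrative placeholders."""
    w1, w2, w3, w4 = weights
    return tuple(w1 * a + w2 * r + w3 * t + w4 * g
                 for a, r, t, g in zip(f_att, f_rep, f_tan, f_grad))

def gradient_force_magnitude(roll, pitch, yaw, k=1.0):
    """Illustrative gradient-force magnitude: grows with the rover's
    attitude angles so that steep terrain is penalized (an assumed form;
    the paper derives roll, pitch and yaw from the rover's kinematics)."""
    return k * math.sqrt(roll ** 2 + pitch ** 2 + yaw ** 2)
```

With equal unit weights, the resultant is simply the vector sum of the four components, and the gradient term vanishes on flat terrain.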


asian conference on computer vision | 2006

A multiscale co-linearity statistic based approach to robust background modeling

Prithwijit Guha; Dibyendu Palai; K. S. Venkatesh; Amitabha Mukerjee

Background subtraction is an essential task in several static camera based computer vision systems. Background modeling is often challenged by spatio-temporal changes occurring due to local motion and/or variations in illumination conditions. The background model is learned from an image sequence in a number of stages, viz. preprocessing, pixel/region feature extraction and statistical modeling of feature distribution. A number of algorithms, mainly focusing on feature extraction and statistical modeling have been proposed to handle the problems and comparatively little exploration has occurred at the preprocessing stage. Motivated by the fact that disturbances caused by local motions disappear at lower resolutions, we propose to represent the images at multiple scales in the preprocessing stage to learn a pyramid of background models at different resolutions. During operation, foreground pixels are detected first only at the lowest resolution, and only these pixels are further analyzed at higher resolutions to obtain a precise silhouette of the entire foreground blob. Such a scheme is also found to yield a significant reduction in computation. The second contribution in this paper involves the use of the co-linearity statistic (introduced by Mester et al. for the purpose of illumination independent change detection in consecutive frames) as a pixel neighborhood feature by assuming a linear model with a signal modulation factor and additive noise. The use of co-linearity statistic as a feature has shown significant performance improvement over intensity or combined intensity-gradient features. Experimental results and performance comparisons (ROC curves) for the proposed approach with other algorithms show significant improvements for several test sequences.


advances in computing and communications | 2014

High accuracy depth filtering for Kinect using edge guided inpainting

Saumik Bhattacharya; Sumana Gupta; K. S. Venkatesh

Kinect is an easy and convenient means to capture the depth of a scene in real time, and it is widely used in many applications for its ease of installation and handling. Many of these applications need a high accuracy depth map of the scene for rendering. Unfortunately, the depth map provided by Kinect suffers from various degradations due to occlusion, shadowing, scattering, etc. The two major degradations are edge distortion and shadowing. Edge distortion arises from the intrinsic properties of Kinect and perceptually degrades any depth-based operation. The problem of edge distortion removal has not received as much attention as the hole filling problem, though it is considerably important at the post-processing stage of an RGB scene. We propose a novel method to remove edge distortion and construct a high accuracy depth map of the scene by exploiting the edge information already present in the RGB image.


International Journal of Computer and Electrical Engineering | 2015

People Counting in High Density Crowds from Still Images

Ankan Bansal; K. S. Venkatesh

We present a method of estimating the number of people in high density crowds from still images. The method estimates counts by fusing information from multiple sources. Most of the existing work on crowd counting deals with very small crowds (tens of individuals) and uses temporal information from videos. Our method uses only still images to estimate the counts in high density images (hundreds to thousands of individuals). At this scale, we cannot rely on only one set of features for count estimation. We therefore use multiple sources, viz. interest points (SIFT), Fourier analysis, wavelet decomposition, GLCM features and low confidence head detections, to estimate the counts. Each of these sources gives a separate estimate of the count along with confidences and other statistical measures, which are then combined to obtain the final estimate. We test our method on an existing dataset of fifty images containing over 64000 individuals. Further, we added another fifty annotated images of crowds and tested on the complete dataset of one hundred images containing over 87000 individuals. The counts per image range from 81 to 4633. We report performance in terms of mean absolute error, a measure of the method's accuracy, and mean normalised absolute error, a measure of its robustness.
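The fusion of per-source estimates can be sketched as a confidence-weighted average. The paper combines richer per-source statistics than this, so the following is only an illustrative reduction of the idea:

```python
def fuse_counts(estimates, confidences):
    """Combine per-source crowd-count estimates into a single count via a
    confidence-weighted average. An illustrative stand-in for the paper's
    fusion of SIFT, Fourier, wavelet, GLCM and head-detection counts."""
    total = sum(confidences)
    if total == 0:
        raise ValueError("at least one source must have nonzero confidence")
    return sum(c * e for e, c in zip(estimates, confidences)) / total
```

A source with higher confidence pulls the final count toward its own estimate; with equal confidences this reduces to a plain mean.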


digital television conference | 2013

Spatio-temporal multi-view synthesis for free viewpoint television

Katta Phani Kumar; Sumana Gupta; K. S. Venkatesh

Interest in view synthesis is growing rapidly as it has numerous applications in free viewpoint television (FTV), 3DTV, games, virtual reality, etc. The main problem in view synthesis is that the virtual view contains holes in disoccluded regions. We propose a hole-filling algorithm that fills the disocclusion holes in the virtual view by exploiting the temporal information of the reference views. We also propose an algorithm to prevent background pixels from shining through a foreground object in the virtual view in the absence of foreground pixel information. We generate different zoomed views of the scene by applying the concept of view synthesis and observe how the holes vary with different zoom scales. Finally, we propose and demonstrate depth-based image segmentation to facilitate parallel computing. Experimental results show that good quality virtual views are generated with high PSNR and with fewer artifacts.


Signal, Image and Video Processing | 2017

Perceptual synoptic view-based video retrieval using metadata

Sinnu Susan Thomas; Sumana Gupta; K. S. Venkatesh

Content-based video retrieval and video synopsis are generally considered two different areas. In this paper, we present an efficient approach to video retrieval based on a perceptual synopsis database of the videos. A video synopsis encapsulates an overview of a shot in a single frame. This is the first time video synopsis has been used for video indexing, providing the user with an intuitive link for accessing actions in the video. We propose an enhanced synopsis, called a meta synopsis, for the video database index, which contains all the essential information for retrieval. Information such as the background of a scene, the motion trajectories of the foreground objects, color, texture, and mutual information in the synopsis database enables us to retrieve relevant video content from huge video databases. Experiments were conducted on the OVP, BBC Motion Gallery, and TRECVID data sets, and on other videos. Instead of using only key frames as query frames, the method accepts arbitrary query frames. The experimental results illustrate that our proposed method can accurately identify a pertinent video from huge video databases.


international conference on image processing | 2016

Dehazing of color image using stochastic enhancement

Saumik Bhattacharya; Sumana Gupta; K. S. Venkatesh

Images captured in the presence of fog, haze or snow usually suffer from poor contrast and visibility. In this paper, we propose a novel dehazing method that increases visibility from a single view without using any prior knowledge about the outdoor scene. The proposed method estimates a visibility map of the scene from the input image and uses a stochastic iterative algorithm to remove fog and haze. The method can be applied to both color and grayscale images. Experimental results show that the proposed algorithm outperforms most state-of-the-art algorithms in terms of contrast, colorfulness and visibility.


international multiconference on computer science and information technology | 2010

PSO based modeling of Takagi-Sugeno fuzzy motion controller for dynamic object tracking with mobile platform

Meenakshi Gupta; Laxmidhar Behera; K. S. Venkatesh

Modeling an optimized motion controller is one of the interesting problems in behavior-based mobile robotics, since behavior-based mobile robots need an accurate controller to generate the intended actions. In this paper, a Takagi-Sugeno fuzzy motion controller, obtained through nonlinear identification, is designed to track the position of a moving object with a mobile platform. The parameters of the controller are optimized with particle swarm optimization (PSO) and a stochastic approximation method. A grey predictor has also been developed to predict the position of the object when it is beyond the robot's field of view. The combined model has been tested on a Pioneer robot that tracks a triangular red box using a CCD camera and a laser sensor.
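The PSO part of the abstract follows the standard velocity/position update rule, which can be sketched as below. This omits the controller and the fitness evaluation entirely; in the paper, the particles would encode the Takagi-Sugeno controller parameters:

```python
import random

def pso_step(positions, velocities, pbest, gbest,
             w=0.7, c1=1.5, c2=1.5, rng=None):
    """One particle swarm optimization step for scalar parameters: each
    particle is pulled toward its personal best (pbest) and the swarm
    best (gbest), with inertia weight w and acceleration factors c1, c2.
    The fitness function that would update pbest/gbest is omitted."""
    rng = rng or random
    new_positions, new_velocities = [], []
    for x, v, pb in zip(positions, velocities, pbest):
        r1, r2 = rng.random(), rng.random()
        v_new = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gbest - x)
        new_velocities.append(v_new)
        new_positions.append(x + v_new)
    return new_positions, new_velocities
```

Once every particle sits at the swarm best with zero velocity, the update leaves the swarm unchanged, which is the convergence behavior the tuning relies on.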


pacific rim international conference on artificial intelligence | 2006

Appearance based multiple agent tracking under complex occlusions

Prithwijit Guha; Amitabha Mukerjee; K. S. Venkatesh

Agents entering the field of view can undergo two different forms of occlusions, either caused by crowding or due to obstructions by background objects at finite distances from the camera. This work aims at identifying the nature of occlusions encountered in multi-agent tracking by using a set of qualitative primitives derived on the basis of the Persistence Hypothesis - objects continue to exist even when hidden from view. We construct predicates describing a comprehensive set of possible occlusion primitives including entry/exit, partial or complete occlusions by background objects, crowding and algorithm failures resulting from track loss. Instantiation of these primitives followed by selective agent feature updates enables us to develop an effective scheme for tracking multiple agents in relatively unconstrained environments. The agents are primarily detected as foreground blobs and are characterized by their centroid trajectory and a non-parametric appearance model learned over the associated pixel co-ordinate and color space. The agents are tracked through a three stage process of motion based prediction, agent-blob association with occlusion primitive identification and appearance model aided agent localization for the occluded ones. The occluded agents are localized within associated foreground regions by a process of iterative foreground pixel assignment to agents followed by their centroid update. Satisfactory tracking performance is observed by employing the proposed algorithm on a traffic video sequence containing complex multi-agent interactions.
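The instantiation of occlusion primitives from agent-blob association outcomes might be sketched as a simple predicate table. The boolean inputs here are illustrative stand-ins for the paper's formal predicates under the Persistence Hypothesis:

```python
def occlusion_primitive(n_associated_blobs, blob_shared,
                        at_image_border, behind_static_object):
    """Map simple agent-blob association outcomes to the qualitative
    occlusion primitives named in the abstract. Illustrative only; the
    paper derives these predicates formally."""
    if n_associated_blobs == 0:
        if at_image_border:
            return "exit"                    # agent left the field of view
        if behind_static_object:
            return "occluded_by_background"  # hidden by a scene object
        return "track_loss"                  # algorithm failure
    if blob_shared:
        return "crowding"                    # several agents share one blob
    return "isolated"                        # unambiguous one-to-one track
```

Only agents classified as occluded would then enter the appearance-model-aided localization stage the abstract describes.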


international conference on robotics and automation | 2006

Efficient continuous re-grasp planning for moving and deforming planar objects

Tripuresh Mishra; Prithwijit Guha; Ashish Dutta; K. S. Venkatesh

A novel approach to real-time tracking of three-finger planar grasp points for deforming objects is proposed. The search space of possible grasping configurations is reduced in two stages: first, by fixing one finger at the boundary point nearest to the object centroid, and second, through a heuristic partitioning of the object boundary where the remaining two fingers are localized. The potential grasping configurations satisfying force closure conditions are evaluated through an objective function that maximizes the grasping span while minimizing the distance between the object centroid and the intersection of the contour normals at the finger contact points. A population based stochastic search strategy is adopted for computing the optimal grasping configurations and re-localizing them as the shape undergoes drastic translations, rotations, scaling and local deformations. Experimental results of grasp point tracking are presented for deforming planar shapes extracted from both real and synthetic image sequences. The current implementation of the proposed scheme operates at 10 Hz for grasp point tracking on shapes extracted through visual feedback.
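The grasp-quality objective described above (reward a large grasping span, penalize the centroid-to-normal-intersection distance) can be sketched as a simple scalar score. The linear trade-off and the weights alpha and beta are assumptions for illustration:

```python
import math

def grasp_objective(span, centroid, normal_intersection, alpha=1.0, beta=1.0):
    """Illustrative grasp score: larger grasping span is better, and a
    smaller distance between the object centroid and the intersection of
    the contact normals is better. The linear form and the weights are
    assumptions, not the paper's exact objective."""
    d = math.dist(centroid, normal_intersection)
    return alpha * span - beta * d
```

A stochastic search over candidate finger placements would then keep the configurations with the highest score, re-evaluating as the shape deforms.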

Collaboration


Dive into K. S. Venkatesh's collaborations.

Top Co-Authors

Sumana Gupta (Indian Institute of Technology Kanpur)
Prithwijit Guha (Indian Institute of Technology Guwahati)
Amitabha Mukerjee (Indian Institute of Technology Kanpur)
Saumik Bhattacharya (Indian Institute of Technology Kanpur)
Ashish Dutta (Indian Institute of Technology Kanpur)
Raju Ranjan (Indian Institute of Technology Kanpur)
Mahesh Kr. Singh (Indian Institute of Technology Kanpur)
Himanshu Kumar (Indian Institute of Technology Kanpur)
Laxmidhar Behera (Indian Institute of Technology Kanpur)
Rupesh Kumar (Indian Institute of Technology Kanpur)