Publications


Featured research published by Supreeth Achar.


Intelligent Robots and Systems | 2011

Yield estimation in vineyards by visual grape detection

Stephen Nuske; Supreeth Achar; Terry Bates; Srinivasa G. Narasimhan; Sanjiv Singh

The harvest yield in vineyards can vary significantly from year to year, and also spatially within plots, due to variations in climate, soil conditions, and pests. Fine-grained knowledge of crop yields allows viticulturists to better manage their vineyards. The current industry practice for yield prediction is destructive, expensive, and spatially sparse: during the growing season, a small number of samples are taken and extrapolated to determine overall yield. We present an automated method that uses computer vision to detect and count grape berries. The method could potentially be deployed across large vineyards, taking measurements at every vine in a non-destructive manner. Our berry detection uses both shape and visual texture, and we demonstrate detection of green berries against a green leaf background. Berry detections are counted and the eventual harvest yield is predicted. Results are presented for 224 vines (over 450 meters) of two different grape varieties and compared against the actual harvest yield as ground truth. We calibrate our berry count to yield and find that we can predict the yield of individual vineyard rows to within 9.8% of actual crop weight.
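
The calibration step described above, from visual berry count to harvest weight, can be as simple as a least-squares linear fit. A minimal sketch in Python; the linear model, the variable names, and the calibration numbers are illustrative assumptions, not values from the paper:

import numpy as np

# Made-up calibration data: per-row berry counts from the vision system
# and hand-measured harvest weights (kg) for the same rows.
berry_counts = np.array([1520.0, 2310.0, 1890.0, 2750.0])
harvest_kg = np.array([41.2, 63.5, 50.8, 74.9])

# Fit weight ~= a * count + b by ordinary least squares.
a, b = np.polyfit(berry_counts, harvest_kg, deg=1)

def predict_row_yield(count):
    """Predict a row's yield in kg from its visual berry count."""
    return a * count + b

print(f"predicted yield: {predict_row_yield(2100.0):.1f} kg")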


Journal of Field Robotics | 2014

Automated Visual Yield Estimation in Vineyards

Stephen Nuske; Kyle Wilshusen; Supreeth Achar; Luke Yoder; Sanjiv Singh

We present a vision system that automatically predicts yield in vineyards accurately and with high resolution. Yield estimation traditionally requires tedious hand measurement, which is destructive, sparse in sampling, and inaccurate. Our method is efficient and high-resolution, and it is the first such system evaluated in realistic experimentation over several years and hundreds of vines spread over several acres of different vineyards. Other existing research is limited to small test sets of 10 vines or fewer, or to isolated grape clusters, with tightly controlled image acquisition and artificially induced yield distributions. The system incorporates cameras and illumination mounted on a vehicle driving through the vineyard. We process images by combining the three prominent visual cues of texture, color, and shape into a strong classifier that detects berries even when they are similar in color to the vine leaves. We introduce methods to maximize the spatial and the overall accuracy of the yield estimates by optimizing the relationship between image measurements and yield. Our experimentation is conducted over four growing seasons in several wine and table-grape vineyards. These are the first such results from experimentation that is sufficiently sized for fair evaluation against true yield variation and real-world imaging conditions from a moving vehicle. Analysis of the results demonstrates yield estimates that capture up to 75% of spatial yield variance, with an average error between 3% and 11% of total yield.
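
A "strong classifier" over texture, color, and shape cues can be illustrated by concatenating simple per-patch features and feeding them to a standard discriminative model. A minimal sketch; the hand-rolled cue features, the random forest, and the placeholder data are assumptions, not the paper's implementation:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def cue_features(patch):
    """Concatenate crude color, texture, and shape cues for an H x W x 3 patch."""
    color = patch.reshape(-1, 3).mean(axis=0)        # mean RGB
    gx, gy = np.gradient(patch.mean(axis=2))
    texture = np.array([gx.std(), gy.std()])         # gradient energy
    shape = np.array([np.hypot(gx, gy).mean()])      # stand-in for a roundness cue
    return np.concatenate([color, texture, shape])

# Placeholder berry / non-berry training patches and labels.
rng = np.random.default_rng(0)
X = np.stack([cue_features(p) for p in rng.random((200, 16, 16, 3))])
y = rng.integers(0, 2, size=200)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("berry probability:", clf.predict_proba(X[:1])[0, 1])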


International Conference on Robotics and Automation | 2008

Autonomous image-based exploration for mobile robot navigation

D Santosh; Supreeth Achar; C. V. Jawahar

Image-based navigation paradigms have recently emerged as an interesting alternative to conventional model-based methods in mobile robotics. In this paper, we augment existing image-based navigation approaches by presenting a novel image-based exploration algorithm. The algorithm enables a mobile robot equipped only with a monocular pan-tilt camera to autonomously explore a typical indoor environment. The algorithm infers frontier information directly from the images and displaces the robot towards regions that are informative for navigation. The frontiers are detected using a geometric context-based segmentation scheme that exploits the natural scene structure in indoor environments. In the process, a topological graph of the workspace is built in terms of images, which can subsequently be utilised for the tasks of localisation, path planning and navigation. Experimental results on a mobile robot in unmodified laboratory and corridor environments demonstrate the validity of the approach.
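
The topological graph mentioned above can be kept deliberately simple: nodes are keyframe images, edges link views the robot travelled directly between, and localisation plus path planning reduce to nearest-neighbour lookup and graph search. A minimal sketch using networkx; the toy descriptors and the matching rule are assumptions:

import networkx as nx
import numpy as np

# Nodes are keyframes (identifier plus an image descriptor); an edge means
# the robot moved directly between the two views during exploration.
g = nx.Graph()
g.add_node("kf0", desc=np.array([0.1, 0.9]))
g.add_node("kf1", desc=np.array([0.2, 0.8]))
g.add_node("kf2", desc=np.array([0.7, 0.3]))
g.add_edges_from([("kf0", "kf1"), ("kf1", "kf2")])

def localise(graph, query_desc):
    """Return the keyframe whose descriptor best matches the current view."""
    return min(graph.nodes,
               key=lambda n: np.linalg.norm(graph.nodes[n]["desc"] - query_desc))

start = localise(g, np.array([0.12, 0.88]))
print(nx.shortest_path(g, source=start, target="kf2"))  # ['kf0', 'kf1', 'kf2']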


Intelligent Robots and Systems | 2011

Perception for a river mapping robot

Andrew Chambers; Supreeth Achar; Stephen Nuske; Joern Rehder; Bernd Kitt; Lyle Chamberlain; Justin Haines; Sebastian Scherer; Sanjiv Singh

Rivers with heavy vegetation are hard to map from the air. Here we consider the task of mapping their course and the vegetation along the shores, with the specific intent of determining river width and canopy height. A complication in such riverine environments is that only intermittent GPS may be available, depending on the thickness of the surrounding canopy. We present a multimodal perception system to be used for the active exploration and mapping of a river from a small rotorcraft flying a few meters above the water. We describe three key components that use computer vision, laser scanning, and inertial sensing to follow the river without the use of a prior map, estimate the motion of the rotorcraft, ensure collision-free operation, and create a three-dimensional representation of the riverine environment. While the ability to fly simplifies the navigation problem, it also introduces an additional set of constraints in terms of size, weight and power. Hence, our solutions are cognizant of the need to perform multi-kilometer missions with a small payload. We present experimental results along a 2 km loop of river using a surrogate system.


International Conference on Robotics and Automation | 2011

Self-supervised segmentation of river scenes

Supreeth Achar; Bharath Sankaran; Stephen Nuske; Sebastian Scherer; Sanjiv Singh

Here we consider the problem of automatically segmenting images taken from a boat or low-flying aircraft. Such a capability is important for autonomous river following and mapping. The need for accurate segmentation in a wide variety of riverine environments challenges state-of-the-art vision-based methods that have been used in more structured settings such as roads and highways. Apart from the lack of structure, the principal difficulty is the large spatial and temporal variation in the appearance of water in the presence of nearby vegetation and reflections from the sky. We propose a self-supervised method to segment images into ‘sky’, ‘river’ and ‘shore’ (vegetation + structures) regions. Our approach uses assumptions about river scene structure to learn appearance models, based on features like color, texture and image location, which are used to segment the image. We validated our algorithm by testing on four datasets captured under varying conditions on different rivers. Our self-supervised algorithm achieved higher accuracy than a supervised alternative, often by a significant margin, and does not need to be retrained to work under different conditions.
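
The self-supervision comes from the assumed scene structure: in a river scene the top of the image is almost certainly sky and the bottom almost certainly water, so those bands can label themselves and train an appearance model for the whole frame. A minimal two-class sketch; the band fractions, the features, the Gaussian naive Bayes model, and the omission of the ‘shore’ class are all simplifying assumptions:

import numpy as np
from sklearn.naive_bayes import GaussianNB

def self_supervised_segment(img):
    """Label pixels of an H x W x 3 image as 0 = sky or 1 = river."""
    h, w, _ = img.shape
    ys, _ = np.mgrid[0:h, 0:w]
    # Features: color plus normalised image row (a location cue).
    feats = np.dstack([img, (ys / h)[..., None]]).reshape(-1, 4)

    # Assumption: the top 10% of rows is sky, the bottom 10% is river.
    sky = feats[(ys < 0.1 * h).ravel()]
    river = feats[(ys > 0.9 * h).ravel()]
    X = np.vstack([sky, river])
    y = np.r_[np.zeros(len(sky)), np.ones(len(river))]

    # Train on this frame's own bands, then classify every pixel in it.
    return GaussianNB().fit(X, y).predict(feats).reshape(h, w)

labels = self_supervised_segment(np.random.default_rng(0).random((120, 160, 3)))
print(labels.shape, np.unique(labels))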


International Conference on Robotics and Automation | 2008

Visual servoing based on Gaussian mixture models

A. H. Abdul Hafez; Supreeth Achar; C. V. Jawahar

In this paper we present a novel approach to robust visual servoing that removes the feature-tracking step of a typical visual servoing algorithm: no feature correspondences are needed to derive the control signal. This is achieved by modeling the image features as a mixture of Gaussians in both the current and desired images. Using Lyapunov theory, a control signal is derived to minimize a distance function between the two Gaussian mixtures. The distance function is given in closed form, and its gradient can be efficiently computed and used to control the system. For simplicity, we first consider the 2D motion case. Then, the general case is presented by introducing the depth distribution of the features to control all six degrees of freedom. Experiments are conducted within a simulation framework to validate the proposed method.
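
The abstract does not spell out the distance function, but a standard choice with a closed form is the L2 distance between the two mixtures: for mixtures p = sum_i w_i N(mu_i, Sigma_i) and q = sum_j v_j N(nu_j, Lambda_j), the integral of (p - q)^2 expands into pairwise terms because the integral of N(x; mu, Sigma) N(x; nu, Lambda) over x equals N(mu; nu, Sigma + Lambda). A sketch for isotropic 2-D components; treating the distance as L2 and the toy feature positions are assumptions, not necessarily the paper's choices:

import numpy as np

def gauss(mu_a, mu_b, var):
    """N(mu_a; mu_b, var * I) in 2-D; equals the integral of the product
    of two isotropic Gaussians whose variances sum to var."""
    d = mu_a - mu_b
    return np.exp(-(d @ d) / (2 * var)) / (2 * np.pi * var)

def gmm_l2(mu_p, w_p, var_p, mu_q, w_q, var_q):
    """Closed-form L2 distance between two isotropic Gaussian mixtures."""
    def cross(ma, wa, va, mb, wb, vb):
        return sum(wa[i] * wb[j] * gauss(ma[i], mb[j], va + vb)
                   for i in range(len(wa)) for j in range(len(wb)))
    return (cross(mu_p, w_p, var_p, mu_p, w_p, var_p)
            - 2 * cross(mu_p, w_p, var_p, mu_q, w_q, var_q)
            + cross(mu_q, w_q, var_q, mu_q, w_q, var_q))

# Image feature locations in the current and desired views as GMM means.
cur = np.array([[10.0, 20.0], [30.0, 25.0]])
des = np.array([[12.0, 22.0], [29.0, 27.0]])
w = np.array([0.5, 0.5])
print(gmm_l2(cur, w, 4.0, des, w, 4.0))   # shrinks as the two views align

Because every term is a Gaussian in the component means, the gradient of this distance with respect to camera motion is also available in closed form, which is what makes it usable as a Lyapunov-style control signal.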


International Conference on Computer Vision | 2013

Compensating for Motion during Direct-Global Separation

Supreeth Achar; Stephen Nuske; Srinivasa G. Narasimhan

Separating the direct and global components of radiance can aid shape recovery algorithms and can provide useful information about the materials in a scene. Practical methods for finding the direct and global components use multiple images captured under varying illumination patterns and require the scene, light source and camera to remain stationary during the image acquisition process. In this paper, we develop a motion compensation method that relaxes this condition and allows direct-global separation to be performed on video sequences of dynamic scenes captured by moving projector-camera systems. Key to our method is being able to register the frames of a video sequence to each other in the presence of time-varying, high-frequency active illumination patterns. We compare our motion-compensated method to alternatives such as single-shot separation and frame interleaving, as well as to ground truth. We present results on challenging video sequences that include various types of motion and deformation in scenes containing complex materials like fabric, skin, leaves and wax.
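
For context, the separation step itself (following Nayar et al.'s fast separation method, on which this line of work builds) needs only the per-pixel maximum and minimum over a set of shifted high-frequency patterns: when each pattern lights roughly half the scene, L_max = L_direct + L_global/2 and L_min = L_global/2, approximately. A sketch of that step under the assumption that the frames are already registered; the motion compensation contributed by this paper is precisely what makes that assumption hold and is not shown:

import numpy as np

def separate(frames):
    """Direct/global separation from an N x H x W stack of registered frames,
    each lit by a shifted high-frequency pattern covering ~half the scene."""
    l_max = frames.max(axis=0)   # pixel directly lit in at least one frame
    l_min = frames.min(axis=0)   # pixel receiving only global light
    return l_max - l_min, 2.0 * l_min   # (direct, global)

# Made-up registered frames; real input would come after motion compensation.
rng = np.random.default_rng(0)
direct, global_ = separate(rng.random((6, 120, 160)))
print(direct.shape, global_.shape)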


ASABE Annual International Meeting, Louisville, Kentucky | 2011

A Camera and Laser System for Automatic Vine Balance Assessment

Ben Grocholsky; Stephen Nuske; Matt Aasted; Supreeth Achar; Terry Bates

Canopy performance, the balance of crop weight and canopy volume, is a key indicator of value in viticultural production. Timely, dense measurements offer the potential to inform management practices and deliver significant improvements in production efficiency. Traditional measurement practices are labor-intensive and provide sparse data that may not reflect vineyard variability. We propose and demonstrate a combination of visual and laser sensing mounted on vineyard machinery that provides dense maps of canopy performance indicators. Current industry practice for measuring grape crop weight involves manually counting clusters on a vine, with destructive sampling to find the average weight of a single cluster. This paper presents an alternative utilizing vision and laser sensing. We demonstrate the use of machine vision to automatically estimate the weight of the crop growing on a vine. Validation of the algorithm was performed by comparing weight estimates generated by the system to ground truth measurements collected by hand. Machine-mounted laser scanners provide direct measurement of canopy shape and volume. Validation of the canopy volume measurement is provided by correlation with manually collected dormant vine pruning weights. Attaching these laser and camera sensors to vineyard machinery will allow crop weight and canopy volume measurements to be collected on a large scale quickly and economically. Experiments performed at vineyards growing Traminette and Riesling wine grapes and Concord juice grapes show that we were able to determine both crop weight and canopy volume to within 10% of their actual values.
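
The canopy volume measurement lends itself to a simple numeric formulation: each laser scan yields a cross-sectional area of canopy, and integrating those areas along the direction of travel gives a volume. A minimal sketch; the trapezoidal integration and the example numbers are assumptions, not the paper's procedure:

import numpy as np

def canopy_volume(slice_areas_m2, spacing_m):
    """Approximate canopy volume by trapezoidal integration of per-scan
    cross-sectional areas (m^2) at a fixed scan spacing (m)."""
    pairs = 0.5 * (slice_areas_m2[1:] + slice_areas_m2[:-1])
    return float(np.sum(pairs * spacing_m))

# Made-up example: scans every 1 cm, areas from segmented canopy returns.
areas = np.array([0.40, 0.42, 0.45, 0.43, 0.41])
print(f"{canopy_volume(areas, spacing_m=0.01):.4f} m^3")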


European Conference on Computer Vision | 2014

Multi Focus Structured Light for Recovering Scene Shape and Global Illumination

Supreeth Achar; Srinivasa G. Narasimhan

Illumination defocus and global illumination effects are major challenges for active illumination scene recovery algorithms. Illumination defocus limits the working volume of projector-camera systems, and global illumination can induce large errors in shape estimates. In this paper, we develop an algorithm for scene recovery in the presence of both defocus and global light transport effects such as interreflections and sub-surface scattering. Our method extends the working volume by using structured light patterns at multiple projector focus settings. A careful characterization of projector blur allows us to decode even partially out-of-focus patterns. This enables our algorithm to recover scene shape and the direct and global illumination components over a large depth of field while still using a relatively small number of images (typically 25-30). We demonstrate the effectiveness of our approach by recovering high-quality depth maps of scenes containing objects made of optically challenging materials such as wax, marble, soap, colored glass and translucent plastic.
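
A common way to make binary pattern decoding robust, and one that extends naturally to multiple focus settings, is to project each pattern alongside its inverse and decode every pixel from whichever focus setting shows the strongest pattern/inverse contrast there. The sketch below is that heuristic only; it is an assumption standing in for the paper's projector blur characterization, which can decode patterns this simple rule would reject:

import numpy as np

def decode_bit(pattern_imgs, inverse_imgs):
    """Decode one structured-light bit per pixel from F x H x W stacks of a
    pattern and its inverse, one image pair per projector focus setting."""
    contrast = np.abs(pattern_imgs - inverse_imgs)  # per-focus decodability
    best = contrast.argmax(axis=0)                  # sharpest setting per pixel
    rows, cols = np.indices(best.shape)
    p = pattern_imgs[best, rows, cols]
    q = inverse_imgs[best, rows, cols]
    return (p > q).astype(np.uint8), np.abs(p - q)  # (bit, confidence)

rng = np.random.default_rng(0)
bits, conf = decode_bit(rng.random((3, 8, 8)), rng.random((3, 8, 8)))
print(bits.shape, conf.min())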


International Conference on Robotics and Automation | 2011

Large scale visual localization in urban environments

Supreeth Achar; C. V. Jawahar; K. Madhava Krishna

This paper introduces a vision-based localization method for large-scale urban environments. The method is based on Bag-of-Words image retrieval techniques and handles problems that arise in urban environments due to repetitive scene structure and the presence of dynamic objects like vehicles. The localization system was experimentally verified in localization experiments along a 5 km path in an urban environment.
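
Bag-of-Words retrieval, the backbone named above, fits in a few lines: quantize each image's local descriptors against a visual vocabulary, build tf-idf histograms, and rank database images by cosine similarity. A minimal sketch; the vocabulary size, k-means clustering, and the random placeholder descriptors are assumptions, not the paper's configuration:

import numpy as np
from sklearn.cluster import KMeans

def tf_vector(descs, vocab):
    """Term-frequency histogram of one image's descriptors over the vocabulary."""
    h = np.bincount(vocab.predict(descs), minlength=vocab.n_clusters).astype(float)
    return h / max(h.sum(), 1.0)

# Made-up local descriptors standing in for e.g. SIFT/ORB features per image.
rng = np.random.default_rng(0)
database = [rng.random((50, 32)) for _ in range(10)]
vocab = KMeans(n_clusters=100, n_init=10, random_state=0).fit(np.vstack(database))

tf_db = np.stack([tf_vector(d, vocab) for d in database])
idf = np.log(len(database) / np.maximum((tf_db > 0).sum(axis=0), 1))

def bow_vector(descs):
    """Unit-norm tf-idf vector for one image."""
    v = tf_vector(descs, vocab) * idf
    return v / max(np.linalg.norm(v), 1e-9)

db_vecs = np.stack([bow_vector(d) for d in database])
query = bow_vector(rng.random((50, 32)))
scores = db_vecs @ query                  # cosine similarity (unit-norm vectors)
print("best match: image", int(scores.argmax()))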

Collaboration


Dive into Supreeth Achar's collaborations.

Top Co-Authors

Stephen Nuske, Carnegie Mellon University
Sanjiv Singh, Carnegie Mellon University
C. V. Jawahar, International Institute of Information Technology
Sebastian Scherer, Carnegie Mellon University
Aravindhan K Krishnan, International Institute of Information Technology
K. Madhava Krishna, International Institute of Information Technology
Andrew Chambers, Carnegie Mellon University
Kyle Wilshusen, Carnegie Mellon University
Luke Yoder, Carnegie Mellon University