Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Stephen Nuske is active.

Publication


Featured research published by Stephen Nuske.


Intelligent Robots and Systems | 2011

Yield estimation in vineyards by visual grape detection

Stephen Nuske; Supreeth Achar; Terry Bates; Srinivasa G. Narasimhan; Sanjiv Singh

The harvest yield in vineyards can vary significantly from year to year and also spatially within plots due to variations in climate, soil conditions and pests. Fine-grained knowledge of crop yields can allow viticulturists to better manage their vineyards. The current industry practice for yield prediction is destructive, expensive and spatially sparse: during the growing season sparse samples are taken and extrapolated to determine overall yield. We present an automated method that uses computer vision to detect and count grape berries. The method could potentially be deployed across large vineyards taking measurements at every vine in a non-destructive manner. Our berry detection uses both shape and visual texture and we can demonstrate detection of green berries against a green leaf background. Berry detections are counted and the eventual harvest yield is predicted. Results are presented for 224 vines (over 450 meters) of two different grape varieties and compared against the actual harvest yield as ground truth. We calibrate our berry count to yield and find that we can predict yield of individual vineyard rows to within 9.8% of actual crop weight.
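
As a hedged, minimal illustration of the two stages described above (berry detection followed by calibration of counts to yield), and not the authors' shape-and-texture detector, the sketch below finds circular berry candidates with a Hough transform and fits a linear count-to-weight map; all function names and parameters are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): detect circular berry
# candidates with a Hough transform, then calibrate counts to harvest weight
# with a linear fit. Parameters are illustrative only.
import cv2
import numpy as np

def count_berry_candidates(image_bgr):
    """Return a rough count of circular, berry-like regions in one image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=8,
        param1=80, param2=25, minRadius=4, maxRadius=20)
    return 0 if circles is None else circles.shape[1]

def calibrate_counts_to_yield(berry_counts, harvest_weights):
    """Least-squares linear map from per-vine berry count to crop weight."""
    slope, intercept = np.polyfit(berry_counts, harvest_weights, deg=1)
    return lambda count: slope * count + intercept
```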


International Symposium on Experimental Robotics | 2013

Automated Crop Yield Estimation for Apple Orchards

Qi Wang; Stephen Nuske; Marcel Bergerman; Sanjiv Singh

Crop yield estimation is an important task in apple orchard management. The current manual sampling-based yield estimation is time-consuming, labor-intensive and inaccurate. To deal with this challenge, we developed a computer vision-based system for automated, rapid and accurate yield estimation. The system uses a two-camera stereo rig for image acquisition. It works at nighttime with controlled artificial lighting to reduce the variance of natural illumination. An autonomous orchard vehicle is used as the support platform for automated data collection. The system scans both sides of each tree row in orchards. A computer vision algorithm detects and registers apples from acquired sequential images, and then generates apple counts as crop yield estimation. We deployed the yield estimation system in Washington State in September 2011. The results show that the system works well with both red and green apples in the tall-spindle planting system. The crop yield estimation errors are -3.2% for a red apple block with about 480 trees, and 1.2% for a green apple block with about 670 trees.
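
A minimal sketch of the per-image detection step under controlled night-time lighting, assuming simple HSV thresholding and blob counting; this is not the paper's stereo detection-and-registration pipeline, and the registration across frames that prevents double counting is omitted. The color bounds are assumptions for red apples.

```python
# Illustrative sketch only: threshold apple-coloured pixels under controlled
# night-time lighting, then count connected blobs in one frame. Cross-frame
# registration (used in the paper to avoid double counting) is omitted here.
import cv2
import numpy as np

def detect_apples(frame_bgr, lo=(0, 120, 120), hi=(15, 255, 255)):
    """Return bounding boxes of red-apple-like blobs via an HSV colour gate."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    # keep components above a small area threshold; stats[i, 4] is blob area
    return [tuple(stats[i, :4]) for i in range(1, n) if stats[i, 4] > 50]
```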


Journal of Field Robotics | 2014

Automated Visual Yield Estimation in Vineyards

Stephen Nuske; Kyle Wilshusen; Supreeth Achar; Luke Yoder; Sanjiv Singh

We present a vision system that automatically predicts yield in vineyards accurately and with high resolution. Yield estimation traditionally requires tedious hand measurement, which is destructive, sparse in sampling, and inaccurate. Our method is efficient, high-resolution, and it is the first such system evaluated in realistic experimentation over several years and hundreds of vines spread over several acres of different vineyards. Other existing research is limited to small test sets of 10 vines or less, or just isolated grape clusters, with tightly controlled image acquisition and with artificially induced yield distributions. The system incorporates cameras and illumination mounted on a vehicle driving through the vineyard. We process images by exploiting the three prominent visual cues of texture, color, and shape into a strong classifier that detects berries even when they are of similar color to the vine leaves. We introduce methods to maximize the spatial and the overall accuracy of the yield estimates by optimizing the relationship between image measurements and yield. Our experimentation is conducted over four growing seasons in several wine and table-grape vineyards. These are the first such results from experimentation that is sufficiently sized for fair evaluation against true yield variation and real-world imaging conditions from a moving vehicle. Analysis of the results demonstrates yield estimates that capture up to 75% of spatial yield variance and with an average error between 3% and 11% of total yield.
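
The paper fuses texture, color, and shape cues into a strong classifier. As a hedged illustration of cue fusion (not the authors' classifier), the sketch below stacks per-pixel color and gradient-energy texture features and feeds them to a random forest; the feature choices, parameters, and the `pixel_features` helper are assumptions.

```python
# Hedged sketch: combine per-pixel colour and texture features in a single
# classifier, in the spirit of fusing multiple visual cues. The feature set
# and the random-forest choice are assumptions, not the paper's classifier.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pixel_features(image_bgr):
    """Stack colour (Lab) with a simple texture cue (local gradient energy)."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    texture = cv2.GaussianBlur(gx * gx + gy * gy, (9, 9), 0)
    feats = np.dstack([lab, texture[..., None]])
    return feats.reshape(-1, feats.shape[-1])

clf = RandomForestClassifier(n_estimators=100)
# clf.fit(pixel_features(train_img), train_labels.ravel())
# berry_mask = clf.predict(pixel_features(test_img)).reshape(test_img.shape[:2])
```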


Intelligent Robots and Systems | 2011

Perception for a river mapping robot

Andrew Chambers; Supreeth Achar; Stephen Nuske; Joern Rehder; Bernd Kitt; Lyle Chamberlain; Justin Haines; Sebastian Scherer; Sanjiv Singh

Rivers with heavy vegetation are hard to map from the air. Here we consider the task of mapping their course and the vegetation along the shores with the specific intent of determining river width and canopy height. A complication in such riverine environments is that only intermittent GPS may be available depending on the thickness of the surrounding canopy. We present a multimodal perception system to be used for the active exploration and mapping of a river from a small rotorcraft flying a few meters above the water. We describe three key components that use computer vision, laser scanning, and inertial sensing to follow the river without the use of a prior map, estimate motion of the rotorcraft, ensure collision-free operation, and create a three dimensional representation of the riverine environment. While the ability to fly simplifies the navigation problem, it also introduces an additional set of constraints in terms of size, weight and power. Hence, our solutions are cognizant of the need to perform multi-kilometer missions with a small payload. We present experimental results along a 2km loop of river using a surrogate system.


International Conference on Robotics and Automation | 2012

Global pose estimation with limited GPS and long range visual odometry

Joern Rehder; Kamal Gupta; Stephen Nuske; Sanjiv Singh

Here we present an approach to estimate the global pose of a vehicle in the face of two distinct problems: first, when using stereo visual odometry for relative motion estimation, a lack of features at close range causes a bias in the motion estimate; second, localizing in the global coordinate frame using very infrequent GPS measurements. To solve these problems, we demonstrate a method to estimate and correct for the bias in visual odometry and a sensor fusion algorithm capable of exploiting sparse global measurements. Our graph-based state estimation framework infers global orientation using a unified representation of local and global measurements and recovers from inaccurate initial estimates of the state, as intermittently available GPS information may delay the observability of the entire state. We also demonstrate a reduction of the complexity of the problem to achieve real-time throughput. In our experiments on an outdoor dataset with distant features, our bias-corrected visual odometry solution yields a fivefold improvement in the accuracy of the estimated translation compared to a standard approach. For a traverse of 2 km we demonstrate the capability of our graph-based state estimation approach to infer global orientation with as few as 6 GPS measurements, with a twofold improvement in mean position error using the corrected visual odometry.
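
As a toy illustration of fusing frequent relative measurements with sparse absolute ones (a much-simplified, position-only stand-in for the paper's graph-based estimator), the following solves a small weighted linear least-squares problem over a 2D trajectory; all names and weights are illustrative assumptions.

```python
# Minimal sketch (not the paper's estimator): fuse relative visual-odometry
# displacements with a few absolute GPS fixes by solving a weighted linear
# least-squares problem over the 2-D positions of a trajectory.
import numpy as np

def fuse_odometry_and_gps(odo_deltas, gps_fixes, w_odo=1.0, w_gps=10.0):
    """odo_deltas: (N-1, 2) relative steps; gps_fixes: dict {index: (x, y)}."""
    n = len(odo_deltas) + 1
    rows, rhs, weights = [], [], []
    for i, d in enumerate(odo_deltas):          # constraint: p[i+1] - p[i] = d
        r = np.zeros(n); r[i + 1], r[i] = 1.0, -1.0
        rows.append(r); rhs.append(np.asarray(d)); weights.append(w_odo)
    for i, fix in gps_fixes.items():            # constraint: p[i] = GPS fix
        r = np.zeros(n); r[i] = 1.0
        rows.append(r); rhs.append(np.asarray(fix)); weights.append(w_gps)
    A = np.array(rows) * np.array(weights)[:, None]
    b = np.array(rhs) * np.array(weights)[:, None]
    positions, *_ = np.linalg.lstsq(A, b, rcond=None)   # (n, 2) solution
    return positions
```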


International Conference on Robotics and Automation | 2013

Infrastructure-free shipdeck tracking for autonomous landing

Sankalp Arora; Sezal Jain; Sebastian Scherer; Stephen Nuske; Lyle Chamberlain; Sanjiv Singh

Shipdeck landing is one of the most challenging tasks for a rotorcraft. Current autonomous rotorcraft use shipdeck-mounted transponders to measure the relative pose of the vehicle to the landing pad. This tracking system is not only expensive but also renders an unequipped ship unlandable. We address the challenge of tracking a shipdeck without additional infrastructure on the deck. We present two methods, based on video and lidar, that are able to track the shipdeck starting at a considerable distance from the ship. This redundant sensor design enables us to have two independent tracking systems. We show the results of the tracking algorithms in three different environments: field testing on actual helicopter flights, simulation with a moving shipdeck for lidar-based tracking, and laboratory experiments using an occluded, moving scaled model of a landing deck for camera-based tracking. The complementary modalities allow shipdeck tracking under varying conditions.
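
The lidar branch of such a tracker must isolate the deck surface from a scan. Purely as a hedged illustration, and not the authors' algorithm, the sketch below fits a dominant plane to lidar points with RANSAC; thresholds and iteration counts are arbitrary assumptions.

```python
# Illustrative sketch, not the paper's tracker: RANSAC plane fit to lidar
# points as one way a deck-like planar surface could be isolated from a scan.
import numpy as np

def ransac_plane(points, n_iters=200, inlier_tol=0.05):
    """points: (N, 3) lidar returns. Returns (normal, d) of best plane n.x = d."""
    rng = np.random.default_rng(0)
    best_inliers, best_model = 0, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, skip it
            continue
        normal /= norm
        d = normal @ sample[0]
        inliers = np.sum(np.abs(points @ normal - d) < inlier_tol)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (normal, d)
    return best_model
```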


Journal of Field Robotics | 2015

Autonomous Exploration and Motion Planning for an Unmanned Aerial Vehicle Navigating Rivers

Stephen Nuske; Sanjiban Choudhury; Sezal Jain; Andrew Chambers; Luke Yoder; Sebastian Scherer; Lyle Chamberlain; Hugh Cover; Sanjiv Singh

Mapping a river's geometry provides valuable information to help understand the topology and health of an environment and deduce other attributes such as which types of surface vessels could traverse the river. While many rivers can be mapped from satellite imagery, smaller rivers that pass through dense vegetation are occluded. We develop a micro air vehicle (MAV) that operates beneath the tree line, detects and maps the river, and plans paths around three-dimensional (3D) obstacles such as overhanging tree branches to navigate rivers purely with onboard sensing, with no GPS and no prior map. We present the two enabling algorithms for exploration and for 3D motion planning. We extract high-level goal points using a novel exploration algorithm that uses multiple layers of information to maximize the length of the river that is explored during a mission. We also present an efficient modification to the SPARTAN (Sparse Tangential Network) algorithm called SPARTAN-lite, which exploits geodesic properties on smooth manifolds of a tangential surface around obstacles to plan rapidly through free space. Using limited onboard resources, the exploration and planning algorithms together compute trajectories through complex, unstructured, and unknown terrain, a capability rarely demonstrated by flying vehicles operating over rivers or over ground. We evaluate our approach against commonly employed algorithms and compare guidance decisions made by our system to those made by a human piloting a boat carrying our system over multiple kilometers. We also present fully autonomous flights in riverine environments, generating 3D maps over several-hundred-meter stretches of tight, winding rivers.


International Conference on Robotics and Automation | 2010

Vision-based localization using an edge map extracted from 3D laser range data

Paulo Vinicius Koerich Borges; Robert Zlot; Michael Bosse; Stephen Nuske; Ashley Tews

Reliable real-time localization is a key component of autonomous industrial vehicle systems. We consider the problem of using on-board vision to determine a vehicle's pose in a known, but non-static, environment. While feasible technologies exist for vehicle localization, many are not suited for industrial settings where the vehicle must operate dependably both indoors and outdoors and in a range of lighting conditions. We extend the capabilities of an existing vision-based localization system, in a continued effort to improve the robustness, reliability and utility of an automated industrial vehicle system. The vehicle pose is estimated by comparing an edge-filtered version of a video stream to an available 3D edge map of the site. We enhance the previous system by additionally filtering the camera input for straight lines using a Hough transform, observing that the 3D environment map contains only linear features. In addition, we present an automated approach for generating 3D edge maps from laser point clouds, removing the need for manual map surveying and also reducing the time for map generation down from days to minutes. We present extensive localization results in multiple lighting conditions comparing the system with and without the proposed enhancements.
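
The Hough-based line-filtering step described above can be sketched as follows; the Canny and Hough parameters are illustrative assumptions, not the values used in the paper.

```python
# Sketch of the line-filtering step only: keep straight edges from the camera
# image, since the 3-D site map contains only linear features.
import cv2
import numpy as np

def straight_edge_mask(image_bgr):
    """Return an edge image containing only Hough-detected straight segments."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=5)
    mask = np.zeros_like(edges)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(mask, (x1, y1), (x2, y2), 255, 1)
    return mask
```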


International Conference on Robotics and Automation | 2011

Self-supervised segmentation of river scenes

Supreeth Achar; Bharath Sankaran; Stephen Nuske; Sebastian Scherer; Sanjiv Singh

Here we consider the problem of automatically segmenting images taken from a boat or low-flying aircraft. Such a capability is important for autonomous river following and mapping. The need for accurate segmentation in a wide variety of riverine environments challenges state-of-the-art vision-based methods that have been used in more structured environments such as roads and highways. Apart from the lack of structure, the principal difficulty is the large spatial and temporal variation in the appearance of water in the presence of nearby vegetation and with reflections from the sky. We propose a self-supervised method to segment images into ‘sky’, ‘river’ and ‘shore’ (vegetation + structures) regions. Our approach uses assumptions about river scene structure to learn appearance models based on features like color, texture and image location, which are used to segment the image. We validated our algorithm by testing on four datasets captured under varying conditions on different rivers. Our self-supervised algorithm achieved higher accuracy than a supervised alternative, often significantly so, and does not need to be retrained to work under different conditions.
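
A hedged sketch of the self-supervision idea, reduced here to two classes (sky and river) and to color features only: seed pixels are drawn from the top and bottom of the frame under the scene-structure assumption, simple Gaussian-mixture appearance models are fitted, and every pixel is labelled by the more likely model. The thresholds and model choice are assumptions, not the paper's.

```python
# Hedged sketch: self-supervised labelling of 'sky' vs 'river' pixels from
# scene-structure assumptions (sky near the top of the frame, river near the
# bottom). Colour-only features; the 'shore' class is omitted for brevity.
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

def self_supervised_labels(image_bgr, seed_frac=0.1):
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB).reshape(-1, 3).astype(np.float64)
    h, w = image_bgr.shape[:2]
    rows = np.repeat(np.arange(h), w)             # row index of each pixel
    sky_seed = lab[rows < seed_frac * h]          # top of frame -> sky seeds
    river_seed = lab[rows > (1 - seed_frac) * h]  # bottom of frame -> river seeds
    sky = GaussianMixture(3).fit(sky_seed)
    river = GaussianMixture(3).fit(river_seed)
    scores = np.stack([sky.score_samples(lab), river.score_samples(lab)], axis=1)
    return scores.argmax(axis=1).reshape(h, w)    # 0 = sky, 1 = river
```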


International Conference on Robotics and Automation | 2006

Extending the dynamic range of robotic vision

Stephen Nuske; Jonathan M. Roberts; Gordon Wyeth

Conventional cameras have limited dynamic range, and as a result vision-based robots cannot effectively view an environment made up of both sunny outdoor areas and darker indoor areas. This paper presents an approach to extend the effective dynamic range of a camera, achieved by changing the exposure level of the camera in real-time to form a sequence of images which collectively cover a wide range of radiance. Individual control algorithms for each image have been developed to maximize the viewable area across the sequence. Spatial discrepancies between images, caused by the moving robot, are improved by a real-time image registration process. The sequence is then combined by merging color and contour information. By integrating these techniques it becomes possible to operate a vision-based robot in wide radiance range scenes.
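
As a rough modern analogue of the idea (not the paper's pipeline, which merges color and contour information), the sketch below fuses a bracketed exposure sequence with OpenCV's Mertens exposure fusion after a coarse MTB alignment; the real-time exposure-control loop is omitted.

```python
# Minimal sketch of the general idea (not the paper's method): fuse a
# bracketed exposure sequence into one well-exposed frame. MTB alignment
# stands in for the paper's real-time image registration.
import cv2

def fuse_exposure_sequence(images_bgr):
    """images_bgr: list of 8-bit frames taken at different exposure levels."""
    cv2.createAlignMTB().process(images_bgr, images_bgr)  # align in place
    fused = cv2.createMergeMertens().process(images_bgr)  # float image ~[0, 1]
    return cv2.convertScaleAbs(fused, alpha=255.0)        # back to 8-bit
```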

Collaboration


Dive into Stephen Nuske's collaboration.

Top Co-Authors

Sanjiv Singh
Carnegie Mellon University

Sebastian Scherer
Carnegie Mellon University

Supreeth Achar
Carnegie Mellon University

Luke Yoder
Carnegie Mellon University

Sezal Jain
Carnegie Mellon University

Gordon Wyeth
Queensland University of Technology

Jonathan M. Roberts
Queensland University of Technology

Andrew Chambers
Carnegie Mellon University

Lyle Chamberlain
Carnegie Mellon University