
Publication


Featured research published by Kevin M. Brink.


Journal of Guidance, Control, and Dynamics | 2017

Partial-Update Schmidt–Kalman Filter

Kevin M. Brink

The Schmidt–Kalman filter (or “consider” Kalman filter) has often been used to account for the uncertainty in so-called “nuisance” parameters when they are impactful to filter accuracy and consistency. Usually such nuisance parameters are errors in environment or sensor models or other static biases whose values do not need to be actively estimated. However, there are times when it is desired or necessary to estimate the nuisance terms themselves. This paper introduces an intermittent form of the Schmidt–Kalman filter, where (within the same filter) nuisance terms are sometimes treated as full filter states and estimated, and at other times they are only considered. Similarly, more generic partial-update forms of the Schmidt–Kalman filter are introduced, where only a portion of the traditional full filter update is applied to select states. These modifications extend the Schmidt filter concept for use on problematic static biases and even time-varying states, allowing them to be estimated while still maintaining...
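The partial-update idea can be sketched in a few lines of NumPy. This is an illustrative formulation rather than the paper's exact derivation: each state i receives a fraction beta[i] of the full Kalman correction, with beta = 1 recovering the standard Kalman update and beta = 0 the Schmidt "consider" treatment. The function name and the elementwise covariance blend are assumptions for illustration.

```python
import numpy as np

def partial_update(x, P, z, H, R, beta):
    """One measurement update where state i receives only a fraction
    beta[i] of the full Kalman update (beta=1: standard KF update;
    beta=0: Schmidt/'consider' treatment, state and covariance kept)."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # full Kalman gain
    dx = K @ (z - H @ x)                   # full state correction
    P_full = P - K @ H @ P                 # full-update covariance
    gamma = 1.0 - np.asarray(beta)         # fraction of the prior retained
    G = np.outer(gamma, gamma)
    x_new = x + beta * dx                  # apply only part of the correction
    P_new = G * P + (1.0 - G) * P_full     # elementwise blend of covariances
    return x_new, P_new
```

Setting beta = 1 for core states and beta = 0 for nuisance states reproduces the classic Schmidt–Kalman behavior; intermediate values give the generic partial update described in the abstract.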


IEEE/ION Position, Location and Navigation Symposium | 2016

Real-time RGBD odometry for fused-state navigation systems

Andrew R. Willis; Kevin M. Brink

This article describes an algorithm that provides visual odometry estimates from sequential pairs of RGBD images. The key contribution of this article on RGBD odometry is that it provides both an odometry estimate and a covariance for the odometry parameters in real-time via a representative covariance matrix. Accurate, real-time parameter covariance is essential to effectively fuse odometry measurements into most navigation systems. To date, this topic has seen little treatment in the research literature, which limits the impact existing RGBD odometry approaches have for localization in these systems. Covariance estimates are obtained via a statistical perturbation approach motivated by real-world models of RGBD sensor measurement noise. Results discuss the accuracy of our RGBD odometry approach with respect to ground truth obtained from a motion capture system and characterize the suitability of this approach for estimating the true RGBD odometry parameter uncertainty.
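The perturbation idea (re-run the estimator on copies of the input perturbed according to the sensor noise model, then take the sample covariance of the results) can be sketched with a toy estimator. A pure translation fit stands in for the full RGBD odometry here, and the function names and isotropic noise model are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_translation(src, dst):
    """Toy 'odometry': least-squares translation aligning matched points."""
    return (dst - src).mean(axis=0)

def perturbation_covariance(src, dst, sigma, n_samples=2000):
    """Sample covariance of the estimate when destination points are
    perturbed per a (here isotropic) measurement-noise model."""
    estimates = np.array([
        estimate_translation(src, dst + rng.normal(0.0, sigma, dst.shape))
        for _ in range(n_samples)
    ])
    return np.cov(estimates, rowvar=False)
```

For this linear toy estimator the covariance of the mean of N points with noise sigma is sigma^2/N on the diagonal, so the Monte-Carlo estimate can be checked in closed form; a real RGBD depth-noise model would make sigma grow with range.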


Journal of Guidance, Control, and Dynamics | 2016

Decentralized Cooperative Control Methods for the Modified Weapon–Target Assignment Problem

Kyle Volle; Jonathan Rogers; Kevin M. Brink

Weapon–target assignment is a combinatorial optimization problem in which a set of weapons must selectively engage a set of targets. In its decentralized form, it is also an important problem in autonomous multi-agent robotics. In this work, decentralized methods are explored for a modified weapon–target assignment problem in which weapons seek to achieve a prespecified probability of kill on each target. Three novel cost functions are proposed that, in cases with low agent-to-target ratios, induce behaviors that may be preferable to the behaviors induced by classical cost functions. The performance of these proposed cost functions is explored in simulation of both homogeneous and heterogeneous engagement scenarios using airborne autonomous weapons. Simulation results demonstrate that the proposed cost functions achieve desired behaviors in cases with low agent-to-target ratios where efficient use of weapons is particularly important.
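An illustrative shortfall-style cost (not one of the paper's three cost functions) captures the "prespecified probability of kill" idea: with independent engagements, the achieved Pk on a target is one minus the product of each assigned weapon's miss probability, and the cost penalizes any gap to the desired Pk. A brute-force search stands in for the paper's decentralized methods; only the cost structure is being illustrated, and all names here are invented:

```python
import numpy as np
from itertools import product

def achieved_pk(assignment, p, n_targets):
    """Pk per target given independent engagements:
    Pk_t = 1 - prod(1 - p[w, t]) over weapons w assigned to t."""
    miss = np.ones(n_targets)
    for w, t in enumerate(assignment):
        miss[t] *= 1.0 - p[w, t]
    return 1.0 - miss

def shortfall_cost(assignment, p, desired):
    """Total shortfall from the desired per-target probability of kill."""
    pk = achieved_pk(assignment, p, len(desired))
    return np.sum(np.maximum(desired - pk, 0.0))

def brute_force_assign(p, desired):
    """Enumerate all weapon->target maps; minimize the shortfall cost."""
    n_w, n_t = p.shape
    return min(product(range(n_t), repeat=n_w),
               key=lambda a: shortfall_cost(a, p, desired))
```

With low agent-to-target ratios, a cost of this shape stops piling weapons onto an already-satisfied target, which is the behavior the abstract highlights.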


IEEE/ION Position, Location and Navigation Symposium | 2012

Filter-based calibration for an IMU and multi-camera system

Kevin M. Brink; Andrey Soloviev

Vision-aided Inertial Navigation Systems (vINS) are capable of providing accurate six degree of freedom (6DoF) state estimation for autonomous vehicles (AVs) in the absence of Global Positioning System (GPS) and other global references. Features observed by a camera can be combined with measurements from an inertial measurement unit (IMU) in a filter to estimate the desired vehicle states. To do so, the rigid body transformation between cameras and the IMU must be known with high precision. Extended Kalman filters (EKF) and Unscented Kalman filters (UKF) have been used to calibrate camera and IMU systems requiring only a simple calibration target and moderate IMU-camera motion. This paper focuses on indoor applications where it is assumed a user is able to easily manipulate the sensor package. We extend the UKF to calibrate an IMU paired with an arbitrary number of cameras, with or without overlapping fields of view.
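The filter machinery involved is standard. A minimal unscented transform, the step a UKF repeats for each camera's measurement model, might look like the following; the weight convention shown is one common choice, not necessarily the paper's:

```python
import numpy as np

def unscented_transform(mu, P, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a mean/covariance through a nonlinear function f
    via 2n+1 sigma points (the core UKF step)."""
    n = len(mu)
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * P)          # matrix square root
    sigmas = np.vstack([mu, mu + L.T, mu - L.T])   # 2n+1 sigma points
    wm = np.full(2 * n + 1, 0.5 / (n + lam))       # mean weights
    wc = wm.copy()                                 # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    ys = np.array([f(s) for s in sigmas])
    y_mu = wm @ ys
    diff = ys - y_mu
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mu, y_cov
```

In a calibration filter of the kind described, the state would be augmented with each camera's extrinsic parameters and f would be that camera's projection model; the transform is exact for linear maps, which gives a convenient sanity check.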


IEEE SoutheastCon | 2017

Linear depth reconstruction for RGBD sensors

Andrew R. Willis; John Papadakis; Kevin M. Brink

Consumer-level depth cameras, referred to as RGBD devices, are important components to robotic recognition, mapping and navigation systems. Past research has provided detailed models for measurement and quantization noise inherent to these devices. Yet, to date, there has been no published work showing how to leverage these noise models to reduce the measured depth error. Existing approaches rely on colored images registered to the depth image to reconstruct depth, which work best when the device is calibrated and the scene lighting and surfaces allow for a Lambertian model. The proposed method is attractive since it works directly on the depth data without any need for calibration or assumptions regarding the scene. Reconstruction is accomplished using a two stage filter. The first stage removes impulse (“salt-and-pepper”) noise and dithers the depth at quantization boundaries by adding small amounts of noise. The second stage low-pass filters the depth to remove the added dithering noise. The dithering process is particularly useful for quickly removing large errors in depth at the extreme of the device range where the depth quantization and impulse noise incur significant error. The proposed reconstruction approach has linear computational complexity and low computational cost. The algorithm is particularly useful for extracting smooth surfaces at the upper limits of the sensor measurement range where impulse noise and quantization errors are large (∼7.5 cm) and can significantly degrade the performance of downstream recognition, navigation, and mapping algorithms.
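The two-stage idea can be sketched on a 1-D scanline with NumPy. A plain median filter, a uniform dither of one quantization step, and a moving-average low-pass stand in for the paper's filters; the kernel sizes and dither distribution are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def reconstruct_depth(depth, q_step, kernel=5):
    """Two-stage sketch: (1) median filter for impulse removal plus a
    dither on the order of one quantization step, (2) moving-average
    low-pass to remove the added dither."""
    n = len(depth)
    pad = kernel // 2
    padded = np.pad(depth, pad, mode='edge')
    # stage 1a: median filter removes impulse ('salt-and-pepper') noise
    med = np.array([np.median(padded[i:i + kernel]) for i in range(n)])
    # stage 1b: dither across quantization boundaries
    dithered = med + rng.uniform(-0.5, 0.5, n) * q_step
    # stage 2: low-pass filter removes the added dithering noise
    padded2 = np.pad(dithered, pad, mode='edge')
    return np.convolve(padded2, np.ones(kernel) / kernel, mode='valid')
```

The test below builds a quantized depth ramp with one impulse error and checks that the reconstruction reduces the RMS error against the true ramp.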


Proceedings of SPIE | 2017

Distributed subterranean exploration and mapping with teams of UAVs

John G. Rogers; Ryan E. Sherrill; Arthur Schang; Shava L. Meadows; Eric P. Cox; Brendan Byrne; David Baran; J. Willard Curtis; Kevin M. Brink

Teams of small autonomous UAVs can be used to map and explore unknown environments which are inaccessible to teams of human operators in humanitarian assistance and disaster relief efforts (HA/DR). In addition to HA/DR applications, teams of small autonomous UAVs can enhance Warfighter capabilities and provide operational stand-off for military operations such as cordon and search, counter-WMD, and other intelligence, surveillance, and reconnaissance (ISR) operations. This paper presents a hardware platform and software architecture that enable distributed teams of heterogeneous UAVs to navigate, explore, and coordinate their activities to accomplish a search task in a previously unknown environment.


International Conference on Unmanned Aircraft Systems | 2016

Multi-sensor robust relative estimation framework for GPS-denied multirotor aircraft

Daniel P. Koch; Timothy W. McLain; Kevin M. Brink

An estimation framework is presented that improves the robustness of GPS-denied state estimation to changing environmental conditions by fusing updates from multiple view-based odometry algorithms. This allows the vehicle to utilize a suite of complementary exteroceptive sensors or sensing modalities. By estimating the vehicle states relative to a local coordinate frame collocated with an odometry keyframe, observability of the relative state is maintained. A description of the general framework is given, as well as the specific equations for a multiplicative extended Kalman filter with a multirotor vehicle. Experimental results are presented that demonstrate the ability of the proposed algorithm to produce accurate and consistent estimates in challenging environments that cause a single-sensor solution to fail.
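The relative-frame bookkeeping can be illustrated in 2-D (SE(2)). This is a sketch of the keyframe-reset idea only; the paper uses a multiplicative extended Kalman filter on the full 3-D multirotor state, and the class and method names here are invented for illustration:

```python
import numpy as np

def compose(a, b):
    """SE(2) composition: pose b expressed in frame a -> global frame."""
    x, y, th = a
    c, s = np.cos(th), np.sin(th)
    return np.array([x + c * b[0] - s * b[1],
                     y + s * b[0] + c * b[1],
                     th + b[2]])

class RelativeEstimator:
    """Track a pose relative to the latest odometry keyframe. Declaring
    a new keyframe folds the relative state into the global pose and
    resets it (in a full filter, its covariance would reset too),
    keeping the relative state observable."""
    def __init__(self):
        self.keyframe = np.zeros(3)   # global pose of current keyframe
        self.rel = np.zeros(3)        # pose relative to the keyframe

    def update_relative(self, rel_pose):
        self.rel = np.asarray(rel_pose, dtype=float)

    def declare_keyframe(self):
        self.keyframe = compose(self.keyframe, self.rel)
        self.rel = np.zeros(3)

    def global_pose(self):
        return compose(self.keyframe, self.rel)
```

Odometry updates from any of the view-based algorithms would feed `update_relative`; the global pose is recovered by composition, so drift accumulates only at keyframe transitions.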


Three-Dimensional Imaging, Visualization, and Display 2016 | 2016

Benchmarking real-time RGBD odometry for light-duty UAVs

Andrew R. Willis; Laith R. Sahawneh; Kevin M. Brink

This article describes the theoretical and implementation challenges associated with generating 3D odometry estimates (delta-pose) from RGBD sensor data in real-time to facilitate navigation in cluttered indoor environments. The underlying odometry algorithm applies to general 6DoF motion; however, the computational platforms, trajectories, and scene content are motivated by their intended use on indoor, light-duty UAVs. Discussion outlines the overall software pipeline for sensor processing and details how algorithm choices for the underlying feature detection and correspondence computation impact the real-time performance and accuracy of the estimated odometry and associated covariance. This article also explores the consistency of odometry covariance estimates and the correlation between successive odometry estimates. The analysis is intended to provide users with the information needed to better leverage RGBD odometry within the constraints of their systems.
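Consistency of covariance estimates is commonly checked with the normalized estimation error squared (NEES); a minimal version of the statistic (the metric itself, not the paper's benchmark harness) is:

```python
import numpy as np

def nees(errors, covs):
    """NEES per sample: e^T P^{-1} e. For a consistent estimator the
    errors are ~N(0, P) and the mean NEES equals the state dimension."""
    return np.array([e @ np.linalg.solve(P, e)
                     for e, P in zip(errors, covs)])
```

Comparing the mean NEES of an odometry pipeline against the state dimension (here using synthetic consistent errors) is the standard first check before trusting reported covariances in a fusion filter.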


Three-Dimensional Imaging, Visualization, and Display 2016 | 2016

Real-time geometric scene estimation for RGBD images using a 3D box shape grammar

Andrew R. Willis; Kevin M. Brink

This article describes a novel real-time algorithm for the purpose of extracting box-like structures from RGBD image data. In contrast to conventional approaches, the proposed algorithm includes two novel attributes: (1) it divides the geometric estimation procedure into subroutines having atomic incremental computational costs, and (2) it uses a generative “Block World” perceptual model that infers both concave and convex box elements from detection of primitive box substructures. The end result is an efficient geometry processing engine suitable for use in real-time embedded systems such as those on UAVs, where it is intended to be an integral component for robotic navigation and mapping applications.


Three-Dimensional Imaging, Visualization, and Display 2016 | 2016

iGRaND: an invariant frame for RGBD sensor feature detection and descriptor extraction with applications

Andrew R. Willis; Kevin M. Brink

This article describes a new 3D RGBD image feature, referred to as iGRaND, for use in real-time systems that use these sensors for tracking, motion capture, or robotic vision applications. iGRaND features use a novel local reference frame derived from the image gradient and depth normal (hence iGRaND) that is invariant to scale and viewpoint for Lambertian surfaces. Using this reference frame, Euclidean invariant feature components are computed at keypoints which fuse local geometric shape information with surface appearance information. The performance of the feature for real-time odometry is analyzed and its computational complexity and accuracy are compared with leading alternative 3D features.
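The kind of local frame described, built from an image gradient and a depth normal, can be sketched as a simple orthonormalization. This is an illustrative construction, not the published iGRaND definition:

```python
import numpy as np

def gradient_normal_frame(normal, gradient):
    """Illustrative orthonormal frame: z-axis = surface normal, x-axis =
    image gradient projected onto the tangent plane, y-axis completes a
    right-handed frame. Returns a rotation matrix (columns = axes)."""
    z = normal / np.linalg.norm(normal)
    g_t = gradient - (gradient @ z) * z   # project gradient off the normal
    x = g_t / np.linalg.norm(g_t)
    y = np.cross(z, x)
    return np.column_stack([x, y, z])
```

Because both inputs rotate with the surface patch, descriptor components expressed in this frame are unchanged by viewpoint, which is the invariance property the abstract attributes to the feature.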

Collaboration


Dive into Kevin M. Brink's collaborations.

Top Co-Authors


Andrew R. Willis

University of North Carolina at Charlotte


Adam J. Rutkowski

Case Western Reserve University


Daniel P. Koch

Brigham Young University


Jonathan Rogers

Georgia Institute of Technology


Kyle Volle

Georgia Institute of Technology
