Publication


Featured research published by Joel A. Hesch.


The International Journal of Robotics Research | 2014

Camera-IMU-based localization: Observability analysis and consistency improvement

Joel A. Hesch; Dimitrios G. Kottas; Sean L. Bowman; Stergios I. Roumeliotis

This work investigates the relationship between system observability properties and estimator inconsistency for a Vision-aided Inertial Navigation System (VINS). In particular, first we introduce a new methodology for determining the unobservable directions of nonlinear systems by factorizing the observability matrix according to the observable and unobservable modes. Subsequently, we apply this method to the VINS nonlinear model and determine its unobservable directions analytically. We leverage our analysis to improve the accuracy and consistency of linearized estimators applied to VINS. Our key findings are evaluated through extensive simulations and experimental validation on real-world data, demonstrating the superior accuracy and consistency of the proposed VINS framework compared to standard approaches.
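As a hedged illustration of the general technique the abstract refers to (not the authors' implementation, and not the actual VINS model), the unobservable directions of a linearized system can be recovered numerically as the nullspace of the stacked observability matrix. The toy two-state system below, with a single relative-position measurement, is a made-up example:

```python
import numpy as np

# Toy linear system (hypothetical example, not the VINS model itself):
# state x = [p1, p2], identity dynamics, relative measurement z = p1 - p2.
Phi = np.eye(2)              # state-transition matrix
H = np.array([[1.0, -1.0]])  # measurement Jacobian

# Observability matrix: stack H @ Phi^k for k = 0, 1, ...
O = np.vstack([H @ np.linalg.matrix_power(Phi, k) for k in range(2)])

# Unobservable directions span the nullspace of O, found via SVD.
_, s, Vt = np.linalg.svd(O)
rank = int(np.sum(s > 1e-10))
N = Vt[rank:].T              # columns span the unobservable subspace

print(N.ravel())             # proportional to [1, 1]/sqrt(2): global translation is unobservable
```

Here the relative measurement cannot distinguish a common shift of both positions, so the direction [1, 1] is unobservable, mirroring (in miniature) the kind of analytical nullspace derivation the paper carries out for VINS.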


International Conference on Computer Vision | 2011

A Direct Least-Squares (DLS) method for PnP

Joel A. Hesch; Stergios I. Roumeliotis

In this work, we present a Direct Least-Squares (DLS) method for computing all solutions of the perspective-n-point camera pose determination (PnP) problem in the general case (n ≥ 3). Specifically, based on the camera measurement equations, we formulate a nonlinear least-squares cost function whose optimality conditions constitute a system of three third-order polynomials. Subsequently, we employ the multiplication matrix to determine all the roots of the system analytically, and hence all minima of the LS cost, without requiring iterations or an initial guess of the parameters. A key advantage of our method is scalability, since the order of the polynomial system that we solve is independent of the number of points. We compare the performance of our algorithm with the leading PnP approaches, both in simulation and experimentally, and demonstrate that DLS consistently achieves accuracy close to the Maximum-Likelihood Estimator (MLE).
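The eigenvalue-based root finding the abstract mentions can be illustrated in one variable: the companion matrix is the univariate special case of the multiplication matrix, and its eigenvalues are exactly the polynomial's roots, with no iteration or initial guess. The cubic below is a made-up example, not one of the DLS polynomials:

```python
import numpy as np

# Univariate analogue of the multiplication-matrix idea: the companion
# matrix of a monic polynomial has that polynomial's roots as its
# eigenvalues, so all roots come out of one eigendecomposition.
coeffs = [1.0, -6.0, 11.0, -6.0]   # monic cubic x^3 - 6x^2 + 11x - 6

def companion(c):
    """Companion matrix of a monic polynomial given by its coefficients."""
    c = np.asarray(c, dtype=float)
    n = len(c) - 1
    M = np.zeros((n, n))
    M[1:, :-1] = np.eye(n - 1)     # subdiagonal of ones (shift structure)
    M[:, -1] = -c[:0:-1] / c[0]    # last column: negated low-order coefficients
    return M

roots = np.sort(np.linalg.eigvals(companion(coeffs)).real)
print(roots)   # roots of the cubic: 1, 2, 3
```

The DLS paper applies the multivariate generalization of this construction to its system of three cubics, which is why the solver is non-iterative.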


IEEE Transactions on Robotics | 2014

Consistency Analysis and Improvement of Vision-aided Inertial Navigation

Joel A. Hesch; Dimitrios G. Kottas; Sean L. Bowman; Stergios I. Roumeliotis

In this paper, we study estimator inconsistency in vision-aided inertial navigation systems (VINS) from the standpoint of systems observability. We postulate that a leading cause of inconsistency is the gain of spurious information along unobservable directions, which results in smaller uncertainties, larger estimation errors, and divergence. We develop an observability constrained VINS (OC-VINS), which explicitly enforces the unobservable directions of the system, hence preventing spurious information gain and reducing inconsistency. This framework is applicable to several variants of the VINS problem such as visual simultaneous localization and mapping (V-SLAM), as well as visual-inertial odometry using the multi-state constraint Kalman filter (MSC-KF). Our analysis, along with the proposed method to reduce inconsistency, are extensively validated with simulation trials and real-world experimentation.
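One standard way to enforce unobservable directions in a linearized estimator, sketched here under simplified assumptions (a generic measurement Jacobian H and nullspace basis N with hypothetical numbers, not the actual VINS matrices or the paper's full propagation-side constraints), is to project the Jacobian so that the filter gains no information along those directions:

```python
import numpy as np

# Observability-constrained update (sketch): given a measurement Jacobian H
# and a basis N for the unobservable subspace, replace H with the closest
# matrix (in the Frobenius-norm sense) satisfying H_oc @ N = 0, so no
# spurious information flows along unobservable directions.
def constrain_jacobian(H, N):
    # Orthogonal projector onto span(N), then remove that component from H.
    P = N @ np.linalg.solve(N.T @ N, N.T)
    return H - H @ P

# Hypothetical numbers for illustration only.
rng = np.random.default_rng(0)
H = rng.standard_normal((2, 4))   # some linearized measurement Jacobian
N = rng.standard_normal((4, 1))   # one unobservable direction

H_oc = constrain_jacobian(H, N)
print(np.linalg.norm(H_oc @ N))   # ~0: no update along the nullspace
```

Because H_oc annihilates N, an EKF update built from it cannot shrink the covariance along the unobservable subspace, which is the mechanism by which spurious information gain is prevented.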


WAFR | 2013

Towards Consistent Vision-Aided Inertial Navigation

Joel A. Hesch; Dimitrios G. Kottas; Sean L. Bowman; Stergios I. Roumeliotis

In this paper, we study estimator inconsistency in Vision-aided Inertial Navigation Systems (VINS) from the standpoint of system observability. We postulate that a leading cause of inconsistency is the gain of spurious information along unobservable directions, resulting in smaller uncertainties, larger estimation errors, and possibly even divergence. We develop an Observability-Constrained VINS (OC-VINS), which explicitly enforces the unobservable directions of the system, hence preventing spurious information gain and reducing inconsistency. Our analysis, along with the proposed method for reducing inconsistency, are extensively validated with simulation trials and real-world experiments.


The International Journal of Robotics Research | 2010

Design and Analysis of a Portable Indoor Localization Aid for the Visually Impaired

Joel A. Hesch; Stergios I. Roumeliotis

In this paper, we present the design and analysis of a portable position and orientation (pose) estimation aid for the visually impaired. Our prototype navigation aid consists of a foot-mounted pedometer and a white cane-mounted sensing package, which comprises a three-axis gyroscope and a two-dimensional (2D) laser scanner. We introduce a two-layered estimator that tracks the 3D orientation of the white cane in the first layer, and the 2D position of the person holding the cane in the second layer. Our algorithm employs a known building map to update the person’s position, and exploits perpendicularity in the building layout as a 3D structural compass. We analytically study the observability properties of the linearized dynamical system, and we provide sufficient observability conditions. We evaluate the real-world performance of our localization aid, and demonstrate its reliability for accurate, real-time human localization.


Robotics: Science and Systems | 2015

Get Out of My Lab: Large-scale, Real-Time Visual-Inertial Localization

Simon Lynen; Torsten Sattler; Michael Bosse; Joel A. Hesch; Marc Pollefeys; Roland Siegwart

Accurately estimating a robot's pose relative to a global scene model and precisely tracking the pose in real-time is a fundamental problem for navigation and obstacle avoidance tasks. Due to the computational complexity of localization against a large map and the memory consumed by the model, state-of-the-art approaches are either limited to small workspaces or rely on a server-side system to query the global model while tracking the pose locally. The latter approaches face the problem of smoothly integrating the server's pose estimates into the trajectory computed locally to avoid temporal discontinuities. In this paper, we demonstrate that large-scale, real-time pose estimation and tracking can be performed on mobile platforms with limited resources without the use of an external server. This is achieved by employing map and descriptor compression schemes as well as efficient search algorithms from computer vision. We derive a formulation for integrating the global pose information into a local state estimator that produces much smoother trajectories than current approaches. Through detailed experiments, we evaluate each of our design choices individually and document its impact on the overall system performance, demonstrating that our approach outperforms state-of-the-art algorithms for localization at scale.


International Symposium on Experimental Robotics | 2013

On the Consistency of Vision-Aided Inertial Navigation

Dimitrios G. Kottas; Joel A. Hesch; Sean L. Bowman; Stergios I. Roumeliotis

In this paper, we study estimator inconsistency in Vision-aided Inertial Navigation Systems (VINS). We show that standard (linearized) estimation approaches, such as the Extended Kalman Filter (EKF), can fundamentally alter the system observability properties, in terms of the number and structure of the unobservable directions. This in turn allows the influx of spurious information, leading to inconsistency. To address this issue, we propose an Observability-Constrained VINS (OC-VINS) methodology that explicitly adheres to the observability properties of the true system. We apply our approach to the Multi-State Constraint Kalman Filter (MSC-KF), and provide both simulation and experimental validation of the effectiveness of our method for improving estimator consistency.


International Conference on Robotics and Automation | 2007

An Indoor Localization Aid for the Visually Impaired

Joel A. Hesch; Stergios I. Roumeliotis

This paper presents an indoor human localization system for the visually impaired. A prototype portable device has been implemented, consisting of a pedometer and a standard white cane, on which a laser range finder and a 3-axis gyroscope have been mounted. A novel pose estimation algorithm has been developed for robustly estimating the heading and position of a person navigating in a known building. The basis of our estimation scheme is a two-layered extended Kalman filter (EKF) for attitude and position estimation. The first layer maintains an attitude estimate of the white cane, which is subsequently provided to the second layer where a position estimate of the user is generated. Experimental results are presented that demonstrate the reliability of the proposed method for accurate, real-time human localization.


International Conference on Robotics and Automation | 2010

A Laser-Aided Inertial Navigation System (L-INS) for human localization in unknown indoor environments

Joel A. Hesch; Faraz M. Mirzaei; Gian Luca Mariottini; Stergios I. Roumeliotis

This paper presents a novel 3D indoor Laser-aided Inertial Navigation System (L-INS) for the visually impaired. An Extended Kalman Filter (EKF) fuses information from an Inertial Measurement Unit (IMU) and a 2D laser scanner, to concurrently estimate the six degree-of-freedom (d.o.f.) position and orientation (pose) of the person and a 3D map of the environment. The IMU measurements are integrated to obtain pose estimates, which are subsequently corrected using line-to-plane correspondences between linear segments in the laser-scan data and orthogonal structural planes of the building. Exploiting the orthogonal building planes ensures fast and efficient initialization and estimation of the map features while providing human-interpretable layout of the environment. The L-INS is experimentally validated by a person traversing a multistory building, and the results demonstrate the reliability and accuracy of the proposed method for indoor localization and mapping.
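The line-to-plane correction described above can be illustrated with a minimal residual computation (hypothetical numbers; the paper's actual EKF update also involves the full state Jacobians and the IMU propagation):

```python
import numpy as np

# A structural plane with unit normal n and offset d: points q on the plane
# satisfy n . q = d. A laser endpoint p, transformed into the global frame,
# should lie on its corresponding plane; the scalar mismatch is the residual
# that drives the filter correction.
n = np.array([1.0, 0.0, 0.0])   # e.g. a wall orthogonal to the x-axis
d = 2.0                          # plane offset (hypothetical)
p = np.array([2.1, 0.7, 1.3])   # laser point in the global frame (hypothetical)

residual = n @ p - d
print(residual)                  # ~0.1: the point sits about 0.1 m off the wall
```

Restricting map planes to the building's orthogonal directions keeps n limited to a few known axes, which is what makes feature initialization fast and the resulting map human-interpretable.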


Advanced Video and Signal Based Surveillance | 2010

Tracking People with a 360-Degree Lidar

John Shackleton; Brian VanVoorst; Joel A. Hesch

Advances in lidar technology, in particular 360-degree lidar sensors, create new opportunities to augment and improve traditional surveillance systems. This paper describes an initial challenge to use a single stationary 360-degree lidar sensor to detect and track people moving throughout a scene in real-time. The depicted approach focuses on overcoming three primary challenges inherent in any lidar tracker: classification and matching errors between multiple human targets, segmentation errors between humans and fixed objects in the scene, and segmentation errors between targets that are very close together.

Collaboration


Top co-authors of Joel A. Hesch:
Sean L. Bowman (University of Pennsylvania)
Gian Luca Mariottini (University of Texas at Arlington)
Chao Qu (University of Pennsylvania)