Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Julian Ryde is active.

Publication


Featured research published by Julian Ryde.


International Conference on Robotics and Automation | 2013

Ascending stairway modeling from dense depth imagery for traversability analysis

Jeffrey A. Delmerico; David Baran; Philip David; Julian Ryde; Jason J. Corso

Localization and modeling of stairways by mobile robots can enable multi-floor exploration for those platforms capable of stair traversal. Existing approaches focus on either stairway detection or traversal, but do not address these problems in the context of path planning for the autonomous exploration of multi-floor buildings. We propose a system for detecting and modeling ascending stairways while performing simultaneous localization and mapping, such that the traversability of each stairway can be assessed by estimating its physical properties. The long-term objective of our approach is to enable exploration of multiple floors of a building by allowing stairways to be considered during path planning as traversable portals to new frontiers. We design a generative model of a stairway as a single object. We localize these models with respect to the map, and estimate the dimensions of the stairway as a whole, as well as its steps. With these estimates, a robot can determine if the stairway is traversable based on its climbing capabilities. Our system consists of two parts: a computationally efficient detector that leverages geometric cues from dense depth imagery to detect sets of ascending stairs, and a stairway modeler that uses multiple detections to infer the location and parameters of a stairway that is discovered during exploration. We demonstrate the performance of this system when deployed on several mobile platforms using a Microsoft Kinect sensor.
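The detect-then-model idea above (many noisy per-frame detections fused into one stairway model, then checked against the platform's climbing limits) can be sketched roughly as follows. This is a toy illustration, not the paper's generative estimator; the `fuse_stairway_detections` helper, its median-fusion rule, and the 0.20 m climbing limit are assumptions for the example.

```python
import statistics

def fuse_stairway_detections(detections, max_rise=0.20):
    """Fuse repeated noisy detections of one stairway into a single model
    and assess traversability against the platform's climbing limit."""
    # Median fusion is robust to an occasional bad detection.
    rise = statistics.median(d["rise"] for d in detections)
    run = statistics.median(d["run"] for d in detections)
    steps = round(statistics.mean(d["steps"] for d in detections))
    return {"rise": rise, "run": run, "steps": steps,
            "traversable": rise <= max_rise}

# Three noisy detections of the same stairway from different viewpoints.
detections = [
    {"rise": 0.17, "run": 0.28, "steps": 12},
    {"rise": 0.18, "run": 0.30, "steps": 12},
    {"rise": 0.16, "run": 0.29, "steps": 13},
]
model = fuse_stairway_detections(detections)
print(model["traversable"])  # True: median rise 0.17 m is within the 0.20 m limit
```

A path planner could then treat any stairway whose fused model passes this check as a traversable portal to a new frontier, as the abstract describes.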


International Conference on Robotics and Automation | 2011

Alignment and 3D scene change detection for segmentation in autonomous earth moving

Julian Ryde; Nick Hillier

The tasks of region or object segmentation and environment change detection in a 3D context are investigated and tested on an autonomous skid-steer loader. This is achieved through a technique analogous to background subtraction: 3D scan data is first aligned and then a voxel subtraction operation is performed against a prior map. We highlight the close relationships between the scan-to-map alignment, background subtraction and 3D scan-to-map matching problems.


Robotics: Science and Systems | 2012

Estimating Human Dynamics On-the-fly Using Monocular Video For Pose Estimation

Priyanshu Agarwal; Suren Kumar; Julian Ryde; Jason J. Corso; Venkat Krovi

Human pose estimation using uncalibrated monocular visual inputs alone is a challenging problem for both the computer vision and robotics communities. From the robotics perspective, the challenge here is one of pose estimation of a multiply-articulated system of bodies using a single nonspecialized environmental sensor (the camera) and thereby creating low-order surrogate computational models for analysis and control. In this work, we propose a technique for estimating the lower-limb dynamics of a human solely based on captured behavior using an uncalibrated monocular video camera. We leverage our previously developed framework for human pose estimation to (i) deduce the correct sequence of temporally coherent gap-filled pose estimates, (ii) estimate physical parameters, employing a dynamics model incorporating anthropometric constraints, and (iii) filter the optimized gap-filled pose estimates using an Unscented Kalman Filter (UKF) with the estimated dynamically equivalent human dynamics model. We test the framework on videos from the publicly available DARPA Mind's Eye Year 1 corpus [8]. The combined estimation and filtering framework not only results in more accurate, physically plausible pose estimates, but also provides pose estimates for frames where the original human pose estimation framework failed to provide one.


Intelligent Robots and Systems | 2012

Fast voxel maps with counting bloom filters

Julian Ryde; Jason J. Corso

In order to achieve good and timely volumetric mapping for mobile robots, we improve the speed and accuracy of multi-resolution voxel map building from 3D data. Mobile robot capabilities, such as SLAM and path planning, often involve algorithms that query a map many times, and this lookup is often the bottleneck limiting execution speed. As such, fast spatial proximity queries have been the topic of much active research. Various data structures have been investigated, including octrees, k-d trees, approximate nearest neighbours and even dense 3D arrays. We tackle this problem by extending previous work that stores the map as a hash table containing occupied voxels at multiple resolutions. We apply counting Bloom filters to the problem of spatial querying and voxel maps for the example application of SLAM. Their efficacy is demonstrated by building 3D maps with both simulated and real 3D point cloud data. Looking up whether a voxel is occupied is three times faster than with the hash table and within 10% of the speed of querying a dense 3D array, potentially the upper limit on query speed. Map generation was done with scan-to-map alignment on simulated depth images, for which the true pose is available. The calculated poses exhibited sub-voxel error of 0.02 m and 0.3 degrees for a typical indoor scene with a map resolution of 0.04 m.
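A counting Bloom filter keyed on voxel indices supports the occupancy queries described above: insertion increments k hashed counters, deletion decrements them, and a voxel is reported occupied only if all k counters are positive (false positives are possible, false negatives are not). A minimal sketch, not the paper's implementation; the filter size, three hash functions, and blake2b-based hashing are assumptions for the example.

```python
import hashlib

class CountingBloomFilter:
    """Minimal counting Bloom filter over voxel indices."""
    def __init__(self, size=1 << 16, num_hashes=3):
        self.counts = [0] * size
        self.size = size
        self.num_hashes = num_hashes

    def _hashes(self, voxel):
        # Derive k independent hash positions by salting one keyed hash.
        data = repr(voxel).encode()
        for i in range(self.num_hashes):
            digest = hashlib.blake2b(data, salt=i.to_bytes(8, "little")).digest()
            yield int.from_bytes(digest[:8], "little") % self.size

    def add(self, voxel):
        for h in self._hashes(voxel):
            self.counts[h] += 1

    def remove(self, voxel):
        # Counters (rather than bits) are what make deletion possible.
        for h in self._hashes(voxel):
            self.counts[h] = max(0, self.counts[h] - 1)

    def occupied(self, voxel):
        # May yield false positives, never false negatives.
        return all(self.counts[h] > 0 for h in self._hashes(voxel))

bf = CountingBloomFilter()
bf.add((4, 2, 1))
print(bf.occupied((4, 2, 1)))  # True
print(bf.occupied((9, 9, 9)))  # False, barring a (very unlikely) false positive
```

The counter array gives O(k) lookups with no pointer chasing, which is why this kind of structure can approach the query speed of a dense 3D array while using far less memory for sparse maps.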


International Symposium on Visual Computing | 2012

An Optimization Based Framework for Human Pose Estimation in Monocular Videos

Priyanshu Agarwal; Suren Kumar; Julian Ryde; Jason J. Corso; Venkat Krovi

Human pose estimation using monocular vision is a challenging problem in computer vision. Past work has focused on developing efficient inference algorithms and probabilistic prior models based on captured kinematic/dynamic measurements. However, such algorithms face challenges in generalization beyond the learned dataset.


Intelligent Robots and Systems | 2013

Voxel planes: Rapid visualization and meshification of point cloud ensembles

Julian Ryde; Vikas Dhiman; Robert Platt

Conversion of unorganized point clouds to surface reconstructions is increasingly required in the mobile robotics perception processing pipeline, particularly with the rapid adoption of RGB-D (color and depth) image sensors. Many contemporary methods stem from work in the computer graphics community designed to handle the point clouds generated by tabletop scanners in a batch-like manner. The requirements for mobile robotics are different and include support for real-time processing, incremental update, localization, mapping, path planning, obstacle avoidance, ray-tracing, terrain traversability assessment, grasping/manipulation and visualization for effective human-robot interaction. We carry out a quantitative comparison of Greedy Projection and Marching Cubes along with our voxel planes method, assessing execution speed, error, compression and visualization appearance. Our voxel planes approach first computes the PCA over the points inside a voxel, combining these PCA results across 2×2×2 voxel neighborhoods in a sliding window. Second, the smallest eigenvector and the voxel centroid define a plane, which is intersected with the voxel to reconstruct the surface patch (a 3-6 sided convex polygon) within that voxel. By the nature of their construction these surface patches tessellate to produce a surface representation of the underlying points. In experiments on public datasets the voxel planes method is 3 times faster than Marching Cubes, offers 300 times better compression than Greedy Projection, and achieves tenfold lower error than Marching Cubes, whilst allowing incremental map updates.
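The per-voxel PCA step described above (smallest eigenvector as the plane normal, voxel centroid as a point on the plane) can be sketched as follows. This is a hedged illustration of that single step only, assuming numpy's eigendecomposition; the neighborhood combination and voxel-polygon intersection from the paper are omitted.

```python
import numpy as np

def voxel_plane(points):
    """Fit a plane to the points inside one voxel via PCA: the eigenvector
    of the covariance matrix with the smallest eigenvalue is the plane
    normal, and the centroid is a point on the plane."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    normal = eigvecs[:, 0]                  # smallest eigenvalue -> normal
    return normal, centroid

# Noisy samples of the plane z = 0 inside a 0.04 m voxel.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 0.04, size=(50, 3))
pts[:, 2] = 0.001 * rng.standard_normal(50)
normal, centroid = voxel_plane(pts)
print(abs(normal[2]))  # close to 1: the recovered normal is ~[0, 0, 1]
```

Intersecting this plane with the voxel cube yields the 3-6 sided convex polygon the abstract mentions, and adjacent voxels' polygons tessellate into the surface.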


Intelligent Robots and Systems | 2013

Mutual localization: Two camera relative 6-DOF pose estimation from reciprocal fiducial observation

Vikas Dhiman; Julian Ryde; Jason J. Corso

Concurrently estimating the 6-DOF pose of multiple cameras or robots - cooperative localization - is a core problem in contemporary robotics. Current works focus on a set of mutually observable world landmarks and often require inbuilt egomotion estimates; situations in which both assumptions are violated often arise, for example, robots with erroneous low-quality odometry and IMU exploring an unknown environment. In contrast to these existing works in cooperative localization, we propose a method, which we call mutual localization, that uses reciprocal observations of camera-fiducials to obviate the need for egomotion estimates and mutually observable world landmarks. We derive and solve an algebraic formulation for the pose of the two-camera mutual localization setup under these assumptions. Our experiments demonstrate the capabilities of our proposed egomotion-free cooperative localization method: for example, it achieves 2 cm and 0.7 degree accuracy for 6-DOF pose at a 2 m sensing range. To demonstrate the applicability of the proposed work, we deploy our method on Turtlebots and compare our results with ARToolKit [1] and Bundler [2], over which our method achieves a tenfold improvement in translation estimation accuracy.


Intelligent Robots and Systems | 2012

Ascending stairway modeling: A first step toward autonomous multi-floor exploration

Jeffrey A. Delmerico; Jason J. Corso; David Baran; Philip David; Julian Ryde

Many robotics platforms are capable of ascending stairways, but all existing approaches for autonomous stair climbing use stairway detection as a trigger for immediate traversal. In the broader context of autonomous exploration, the ability to travel between floors of a building should be compatible with path planning, such that the robot can traverse a stairway at a time that is appropriate to its navigation goals. No system yet presented is capable of both localizing stairways on a map and estimating their properties, functions that in combination would enable stairways to be considered as traversable terrain in a path planning algorithm. We propose a method for modeling stairways as objects and localizing them on a map, such that they can be subsequently traversed if they are of dimensions that the robotic platform is capable of climbing. Our system consists of two parts: a computationally efficient detector that leverages geometric cues from depth imagery to detect sets of ascending stairs, and a stairway modeler that uses multiple detections to infer the location and parameters of a stairway that is discovered during exploration. This video demonstrates the performance of the system in a number of real-world situations, modeling and localizing a variety of stairway types in both indoor and outdoor environments.


IFAC Proceedings Volumes | 2011

Experiments in Autonomous Earth Moving

Adrian Bonchis; Nicholas Hillier; Julian Ryde; Elliot S. Duff; Cédric Pradalier

This paper presents a technology demonstrator currently under development and describes experiments carried out to date in autonomous bulk material handling using mobile equipment. Our primary platform is a Bobcat S185 skid-steer loader instrumented with an onboard computer, a sensor suite, and a communication link that together support various levels of automation, from remote control to supervised autonomy. We present the main system components and discuss the autonomous cleaning of spillage and carryback, a common bulk handling task in mining that is currently executed exclusively using manually and/or remotely operated loaders. The system architecture is based on Spring, a robotics software framework developed by CSIRO to support rapid development of new robotic systems, distributed as an open-source package.


IEEE/ASME Transactions on Mechatronics | 2014

Estimating Dynamics On-the-Fly Using Monocular Video For Vision-Based Robotics

Priyanshu Agarwal; Suren Kumar; Julian Ryde; Jason J. Corso; Venkat Krovi

Estimating the physical parameters of articulated multibody systems (AMBSs) using an uncalibrated monocular camera poses significant challenges for vision-based robotics. Articulated multibody models, especially ones including dynamics, have shown good performance for pose tracking, but require good estimates of system parameters. In this paper, we first propose a technique for estimating parameters of a dynamically equivalent model (kinematic/geometric lengths as well as mass, inertia, and damping coefficients) given only the underlying articulated model topology. The estimated dynamically equivalent model is then employed to help predict/filter/gap-fill the raw pose estimates, using an unscented Kalman filter. The framework is tested initially on videos of a relatively simple AMBS (a double pendulum in a structured laboratory environment). The double pendulum not only served as a surrogate model for the human lower limb in flight phase, but also helped evaluate the role of model fidelity. The treatment is then extended to realize physically plausible pose estimates of human lower-limb motions in more complex uncalibrated monocular videos (from the publicly available DARPA Mind's Eye Year 1 corpus). Beyond the immediate problem at hand, the presented work has applications in the creation of low-order surrogate computational dynamics models for analysis, control, and tracking of many other articulated multibody robotic systems (e.g., manipulators, humanoids) using vision.

Collaboration


Dive into Julian Ryde's collaborations.

Top Co-Authors

Priyanshu Agarwal

University of Texas at Austin


Adrian Bonchis

Commonwealth Scientific and Industrial Research Organisation


Elliot S. Duff

Commonwealth Scientific and Industrial Research Organisation


Nicholas Hillier

Commonwealth Scientific and Industrial Research Organisation
