Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Paul A. Beardsley is active.

Publication


Featured research published by Paul A. Beardsley.


International Conference on Computer Graphics and Interactive Techniques | 2010

High-quality single-shot capture of facial geometry

Thabo Beeler; Bernd Bickel; Paul A. Beardsley; Bob Sumner; Markus H. Gross

This paper describes a passive stereo system for capturing the 3D geometry of a face in a single shot under standard light sources. The system is low-cost and easy to deploy. Results are submillimeter accurate and commensurate with those from state-of-the-art systems based on active lighting, and the models meet the quality requirements of a demanding domain like the movie industry. Recovered models are shown for captures from both high-end cameras in a studio setting and from a consumer binocular-stereo camera, demonstrating scalability across a spectrum of camera deployments, and showing the potential for 3D face modeling to move beyond the professional arena and into the emerging consumer market in stereoscopic photography. Our primary technical contribution is a modification of standard stereo refinement methods to capture pore-scale geometry, using a qualitative approach that produces visually realistic results. The second technical contribution is a calibration method suited to face capture systems. The systemic contribution includes multiple demonstrations of system robustness and quality. These include capture in a studio setup, capture off a consumer binocular-stereo camera, scanning of faces of varying gender, ethnicity, and age, capture of highly transient facial expressions, and scanning a physical mask to provide ground-truth validation.
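
The pore-scale refinement can be pictured as embossing high-frequency image detail into the recovered surface: locally dark pixels (pores, fine wrinkles) are pushed slightly below the smooth stereo surface. The snippet below is a minimal sketch of that idea under simplifying assumptions, not the authors' implementation; the function name, the height-map representation, and the scale factor are all illustrative.

```python
from scipy.ndimage import gaussian_filter

def emboss_pore_detail(height, image, sigma=2.0, scale=0.05):
    """Qualitative mesoscopic augmentation sketch (illustrative only).

    height : (H, W) per-pixel surface height from stereo (larger = closer
             to the camera), in millimeters
    image  : (H, W) grayscale intensity in [0, 1]
    sigma  : radius in pixels separating macro shading from micro detail
    scale  : millimeters of displacement per unit of intensity contrast
    """
    # High-pass component of the image: negative where a pixel is darker
    # than its neighborhood (e.g. inside a pore or a fine wrinkle).
    high_pass = image - gaussian_filter(image, sigma)
    # Darker than surroundings -> recess the surface; brighter -> raise it.
    return height + scale * high_pass
```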


International Conference on Computer Graphics and Interactive Techniques | 2011

High-quality passive facial performance capture using anchor frames

Thabo Beeler; Fabian Hahn; Derek Bradley; Bernd Bickel; Paul A. Beardsley; Craig Gotsman; Robert W. Sumner; Markus H. Gross

We present a new technique for passive and markerless facial performance capture based on anchor frames. Our method starts with high resolution per-frame geometry acquisition using state-of-the-art stereo reconstruction, and proceeds to establish a single triangle mesh that is propagated through the entire performance. Leveraging the fact that facial performances often contain repetitive subsequences, we identify anchor frames as those which contain similar facial expressions to a manually chosen reference expression. Anchor frames are automatically computed over one or even multiple performances. We introduce a robust image-space tracking method that computes pixel matches directly from the reference frame to all anchor frames, and thereby to the remaining frames in the sequence via sequential matching. This allows us to propagate one reconstructed frame to an entire sequence in parallel, in contrast to previous sequential methods. Our anchored reconstruction approach also limits tracker drift and robustly handles occlusions and motion blur. The parallel tracking and mesh propagation offer low computation times. Our technique will even automatically match anchor frames across different sequences captured on different occasions, propagating a single mesh to all performances.
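
Anchor frames are found by comparing every frame against the chosen reference expression. The sketch below illustrates that selection step with a naive per-pixel dissimilarity measure; the paper's actual matching is far more robust, and the threshold and function names here are assumptions.

```python
import numpy as np

def find_anchor_frames(frames, ref_index, threshold=0.05):
    """Toy anchor-frame selection: flag frames whose appearance is close
    to a chosen reference expression.

    frames    : (N, H, W) grayscale frames, values in [0, 1]
    ref_index : index of the manually chosen reference frame
    threshold : maximum mean absolute difference to count as an anchor
    """
    ref = frames[ref_index]
    # Per-frame dissimilarity to the reference expression.
    scores = np.mean(np.abs(frames - ref), axis=(1, 2))
    return [i for i, s in enumerate(scores) if s < threshold]
```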


The International Journal of Robotics Research | 2012

Image and animation display with multiple mobile robots

Javier Alonso-Mora; Andreas Breitenmoser; Martin Rufli; Roland Siegwart; Paul A. Beardsley

In this article we present a novel display that is created using a group of mobile robots. In contrast to traditional displays that are based on a fixed grid of pixels, such as a screen or a projection, this work describes a display in which each pixel is a mobile robot of controllable color. Pixels become mobile entities, and their positioning and motion are used to produce a novel experience. The system input is a single image or an animation created by an artist. The first stage is to generate physical goal configurations and robot colors to optimally represent the input imagery with the available number of robots. The run-time system includes goal assignment, path planning and local reciprocal collision avoidance, to guarantee smooth, fast and oscillation-free motion between images. The algorithms scale to very large robot swarms and extend to a wide range of robot kinematics. Experimental evaluation is carried out on two physical swarms of 14 and 50 differentially driven robots, respectively, and in simulations with 1,000 robot pixels.
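
The goal-assignment stage pairs each robot with a goal position derived from the input image. Below is a minimal sketch of one way to do this, using the Hungarian algorithm on squared travel distances; it is illustrative only, and its O(N^3) solver would need to be replaced by a more scalable strategy for very large swarms.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_goals(robot_positions, goal_positions):
    """Assign each robot one goal pixel, minimizing total squared travel.

    robot_positions : (N, 2) current robot positions
    goal_positions  : (N, 2) goal positions generated from the input image

    Returns an array `goals_for_robot` such that robot i should drive to
    goal_positions[goals_for_robot[i]].
    """
    diff = robot_positions[:, None, :] - goal_positions[None, :, :]
    cost = np.sum(diff ** 2, axis=-1)            # N x N squared distances
    robot_idx, goal_idx = linear_sum_assignment(cost)
    goals_for_robot = np.empty(len(robot_positions), dtype=int)
    goals_for_robot[robot_idx] = goal_idx
    return goals_for_robot
```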


International Conference on Computer Graphics and Interactive Techniques | 2012

Coupled 3D reconstruction of sparse facial hair and skin

Thabo Beeler; Bernd Bickel; Gioacchino Noris; Paul A. Beardsley; Steve Marschner; Robert W. Sumner; Markus H. Gross

Although facial hair plays an important role in individual expression, facial-hair reconstruction is not addressed by current face-capture systems. Our research addresses this limitation with an algorithm that treats hair and skin surface capture together in a coupled fashion so that a high-quality representation of hair fibers as well as the underlying skin surface can be reconstructed. We propose a passive, camera-based system that is robust against arbitrary motion since all data is acquired within the time period of a single exposure. Our reconstruction algorithm detects and traces hairs in the captured images and reconstructs them in 3D using a multiview stereo approach. Our coupled skin-reconstruction algorithm uses information about the detected hairs to deliver a skin surface that lies underneath all hairs irrespective of occlusions. In dense regions like eyebrows, we employ a hair-synthesis method to create hair fibers that plausibly match the image data. We demonstrate our scanning system on a number of individuals and show that it can successfully reconstruct a variety of facial-hair styles together with the underlying skin surface.
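
Hair detection starts from finding thin, line-like structures in the images. The following is a simplified stand-in for that step, not the paper's detector: it scores each pixel by the dominant eigenvalue of the image Hessian at a chosen scale, which responds strongly across a dark hair and weakly along it. Names and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hair_ridge_response(image, sigma=1.5):
    """Per-pixel response for thin, dark, hair-like structures.

    image : (H, W) grayscale intensity in [0, 1]
    sigma : Gaussian scale roughly matching the hair width in pixels

    A dark 1D line has one strongly positive Hessian eigenvalue across the
    line and a near-zero eigenvalue along it, so the larger eigenvalue
    (clipped to positive values) is returned. Thresholding this map gives
    candidate hair pixels that a tracer could then link into fibers.
    """
    # Second derivatives of the smoothed image (axis 0 = y, axis 1 = x).
    Ixx = gaussian_filter(image, sigma, order=(0, 2))
    Iyy = gaussian_filter(image, sigma, order=(2, 0))
    Ixy = gaussian_filter(image, sigma, order=(1, 1))
    # Eigenvalues of the 2x2 Hessian [[Ixx, Ixy], [Ixy, Iyy]].
    mean = 0.5 * (Ixx + Iyy)
    radius = np.sqrt(0.25 * (Ixx - Iyy) ** 2 + Ixy ** 2)
    lam_max = mean + radius
    return np.clip(lam_max, 0.0, None)
```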


Autonomous Robots | 2015

Collision avoidance for aerial vehicles in multi-agent scenarios

Javier Alonso-Mora; Tobias Naegeli; Roland Siegwart; Paul A. Beardsley

This article describes an investigation of local motion planning, or collision avoidance, for a set of decision-making agents navigating in 3D space. The method is applicable to agents which are heterogeneous in size, dynamics and aggressiveness. It builds on the concept of velocity obstacles (VO), which characterizes the set of trajectories that lead to a collision between interacting agents. Motion continuity constraints are satisfied by using a trajectory tracking controller and constraining the set of available local trajectories in an optimization. Collision-free motion is obtained by selecting a feasible trajectory from the VO’s complement, where reciprocity can also be encoded. Three algorithms for local motion planning are presented—(1) a centralized convex optimization in which a joint quadratic cost function is minimized subject to linear and quadratic constraints, (2) a distributed convex optimization derived from (1), and (3) a centralized non-convex optimization with binary variables in which the global optimum can be found, albeit at higher computational cost. A complete system integration is described and results are presented in experiments with up to four physical quadrotors flying in close proximity, and in experiments with two quadrotors avoiding a human.
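
The velocity-obstacle construction underlying all three algorithms characterizes which relative velocities lead to a collision within a time horizon. The snippet below is a hedged numerical sketch of only that membership test for two spherical agents in 3D; the article's methods go further and optimize over the complement of this set.

```python
import numpy as np

def in_velocity_obstacle(p_rel, v_rel, r_sum, tau):
    """Return True if the relative velocity leads to a collision within tau.

    p_rel : (3,) position of agent B relative to agent A
    v_rel : (3,) velocity of A relative to B (v_A - v_B)
    r_sum : combined radius of the two spherical agents
    tau   : time horizon in seconds
    """
    if np.linalg.norm(p_rel) <= r_sum:
        return True  # already in collision
    speed_sq = float(np.dot(v_rel, v_rel))
    if speed_sq == 0.0:
        return False
    # Time of closest approach, clamped to the horizon.
    t_star = np.clip(np.dot(p_rel, v_rel) / speed_sq, 0.0, tau)
    closest = p_rel - t_star * v_rel
    return t_star > 0.0 and np.linalg.norm(closest) < r_sum
```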


International Conference on Robotics and Automation | 2012

Reciprocal collision avoidance for multiple car-like robots

Javier Alonso-Mora; Andreas Breitenmoser; Paul A. Beardsley; Roland Siegwart

In this paper a method for distributed reciprocal collision avoidance among multiple non-holonomic robots with bike kinematics is presented. The proposed algorithm, bicycle reciprocal collision avoidance (B-ORCA), builds on the concept of optimal reciprocal collision avoidance (ORCA) for holonomic robots but furthermore guarantees collision-free motions under the kinematic constraints of car-like vehicles. The underlying principle of the B-ORCA algorithm applies more generally to other kinematic models, as it combines velocity obstacles with generic tracking control. The theoretical results on collision avoidance are validated by several simulation experiments between multiple car-like robots.
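
Once ORCA-style constraints are available as half-planes on the new velocity, the remaining step is to pick the feasible velocity closest to the preferred one. The sketch below illustrates that selection with a small constrained least-squares solve; constructing the half-planes themselves, and B-ORCA's handling of the car-like tracking error, are omitted, and all names are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def select_velocity(v_pref, halfplanes, v_max):
    """Pick the new velocity closest to the preferred one subject to
    ORCA-style half-plane constraints n . v >= b and a speed limit.

    v_pref     : (2,) preferred velocity
    halfplanes : list of (n, b) with n a (2,) unit normal and b a scalar
    v_max      : maximum speed
    """
    cons = [{"type": "ineq", "fun": lambda v, n=n, b=b: np.dot(n, v) - b}
            for (n, b) in halfplanes]
    cons.append({"type": "ineq",
                 "fun": lambda v: v_max ** 2 - np.dot(v, v)})
    # Minimize the squared distance to the preferred velocity.
    res = minimize(lambda v: np.sum((v - v_pref) ** 2), x0=np.asarray(v_pref, float),
                   constraints=cons, method="SLSQP")
    return res.x if res.success else None
```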


International Conference on Computer Graphics and Interactive Techniques | 2013

Image-based reconstruction and synthesis of dense foliage

Derek Bradley; Derek Nowrouzezahrai; Paul A. Beardsley

Flora is an element in many computer-generated scenes. But trees, bushes and plants have complex geometry and appearance, and are difficult to model manually. One way to address this is to capture models directly from the real world. Existing techniques have focused on extracting macro structure such as the branching structure of trees, or the structure of broad-leaved plants with a relatively small number of surfaces. This paper presents a finer scale technique to demonstrate for the first time the processing of densely leaved foliage - computation of 3D structure, plus extraction of statistics for leaf shape and the configuration of neighboring leaves. Our method starts with a mesh of a single exemplar leaf of the target foliage. Using a small number of images, point cloud data is obtained from multi-view stereo, and the exemplar leaf mesh is fitted non-rigidly to the point cloud over several iterations. In addition, our method learns a statistical model of leaf shape and appearance during the reconstruction phase, and a model of the transformations between neighboring leaves. This information is useful in two ways - to augment and increase leaf density in reconstructions of captured foliage, and to synthesize new foliage that conforms to a user-specified layout and density. The result of our technique is a dense set of captured leaves with realistic appearance, and a method for leaf synthesis. Our approach excels at reconstructing plants and bushes that are primarily defined by dense leaves and is demonstrated with multiple examples.
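
The learned model of neighboring-leaf transformations is what allows both densification and synthesis. As a rough illustration, one could fit a Gaussian to measured neighbor-to-neighbor offsets and sample it to place new leaves; the sketch below does exactly that with a deliberately simplified parameterization (offset plus yaw), which is an assumption rather than the paper's transformation model.

```python
import numpy as np

def fit_neighbor_model(neighbor_offsets):
    """Fit a Gaussian to observed neighbor-to-neighbor leaf offsets.

    neighbor_offsets : (N, 4) rows of [dx, dy, dz, d_yaw] measured between
                       reconstructed neighboring leaves.
    """
    mean = neighbor_offsets.mean(axis=0)
    cov = np.cov(neighbor_offsets, rowvar=False)
    return mean, cov

def synthesize_leaves(seed_position, seed_yaw, mean, cov, count, rng=None):
    """Grow a chain of new leaf placements by sampling the learned model."""
    rng = np.random.default_rng() if rng is None else rng
    position = np.asarray(seed_position, dtype=float)
    yaw = float(seed_yaw)
    placements = [(position.copy(), yaw)]
    for _ in range(count):
        dx, dy, dz, d_yaw = rng.multivariate_normal(mean, cov)
        position = position + np.array([dx, dy, dz])
        yaw += d_yaw
        placements.append((position.copy(), yaw))
    return placements
```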


International Conference on Robotics and Automation | 2015

Gesture based human - Multi-robot swarm interaction and its application to an interactive display

Javier Alonso-Mora; S. Haegeli Lohaus; Philipp Leemann; Roland Siegwart; Paul A. Beardsley

A taxonomy for gesture-based interaction between a human and a group (swarm) of robots is described. Methods are classified into two categories. First, free-form interaction, where the robots are unconstrained in position and motion and the user can use deictic gestures to select subsets of robots and assign target goals and trajectories. Second, shape-constrained interaction, where the robots are in a configuration shape that can be modified by the user. In the latter, the user controls a subset of meaningful degrees of freedom defining the overall shape instead of each robot directly. A multi-robot interactive display is described where a depth sensor is used to recognize human gestures, determining the commands sent to a group comprising tens of robots. Experimental results with a preliminary user study show the usability of the system.
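
Deictic selection can be reduced to a simple geometric test: robots that fall inside a cone around the user's pointing ray are selected. The sketch below shows that test; the pointing-ray estimate (e.g., a shoulder-to-hand direction from the depth sensor's skeleton) and the cone angle are assumptions, not the system's calibrated values.

```python
import numpy as np

def select_pointed_robots(ray_origin, ray_direction, robot_positions,
                          max_angle_deg=15.0):
    """Return indices of robots inside the pointing cone.

    ray_origin      : (3,) e.g. the tracked hand position
    ray_direction   : (3,) e.g. shoulder-to-hand direction from the skeleton
    robot_positions : (N, 3) robot positions in the same frame
    max_angle_deg   : half-angle of the selection cone in degrees
    """
    d = ray_direction / np.linalg.norm(ray_direction)
    to_robots = robot_positions - ray_origin
    dist = np.linalg.norm(to_robots, axis=1)
    cos_angle = (to_robots @ d) / np.maximum(dist, 1e-9)
    in_front = to_robots @ d > 0
    in_cone = cos_angle >= np.cos(np.radians(max_angle_deg))
    return np.nonzero(in_front & in_cone)[0]
```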


Intelligent Robots and Systems | 2013

Design and control of a spherical omnidirectional blimp

Michael Burri; L. Gasser; M. Kach; Matthias Krebs; S. Laube; Anton Ledergerber; Daniel M. Meier; R. Michaud; Lukas Mosimann; L. Muri; C. Ruch; Andreas Schaffner; N. Vuilliomenet; J. Weichart; Konrad Rudin; Stefan Leutenegger; Javier Alonso-Mora; Roland Siegwart; Paul A. Beardsley

This paper presents Skye, a novel blimp design. Skye is a helium-filled sphere of diameter 2.7 m with a strong inelastic outer hull and an impermeable elastic inner hull. Four tetrahedrally arranged actuation units (AUs) are mounted on the hull for locomotion, with each AU having a thruster which can be rotated around a radial axis through the sphere center. This design provides redundant control in the six degrees of freedom of motion, and Skye is able to move omnidirectionally and to rotate around any axis. A multi-camera module is also mounted on the hull for capture of aerial imagery or live video stream according to an 'eyeball' concept - the camera module is not itself actuated, but the whole blimp is rotated in order to obtain a desired camera view. Skye is safe for use near people - the double hull minimizes the likelihood of rupture on an unwanted collision; the propellers are covered by grills to prevent accidental contact; and the blimp is near neutral buoyancy so that it makes only a light impact on contact and can be readily nudged away. The system is portable and deployable by a single operator - the electronics, AUs, and camera unit are mounted externally and are detachable from the hull during transport; operator control is via an intuitive touchpad interface. The motivating application is in entertainment robotics. Skye has a varied motion vocabulary such as swooping and bobbing, plus internal LEDs for visual effect. Computer vision enables interaction with an audience. Experimental results show dexterous maneuvers in indoor and outdoor environments, and non-dangerous impacts between the blimp and humans.
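
The redundant six-degree-of-freedom control comes from distributing a desired net force and torque over the four actuation units. A minimal allocation sketch is given below: it treats each AU as a free 3D force applied at its mounting point and solves for the minimum-norm solution, ignoring the real constraint that each thruster only rotates about its radial axis. All names are illustrative.

```python
import numpy as np

def allocate_wrench(au_positions, desired_wrench):
    """Minimum-norm force allocation for actuation units on a spherical hull.

    au_positions   : (4, 3) AU mounting points relative to the sphere center
    desired_wrench : (6,) desired [force_x, force_y, force_z,
                                   torque_x, torque_y, torque_z]

    Returns a (4, 3) array of force vectors, one per AU.
    """
    def skew(r):
        # Cross-product matrix so that skew(r) @ f == np.cross(r, f).
        return np.array([[0.0, -r[2], r[1]],
                         [r[2], 0.0, -r[0]],
                         [-r[1], r[0], 0.0]])

    # Each 6x3 block maps one AU's force to its net force and torque contribution.
    blocks = [np.vstack([np.eye(3), skew(r)]) for r in au_positions]
    A = np.hstack(blocks)                      # 6 x 12 allocation matrix
    forces = np.linalg.pinv(A) @ desired_wrench
    return forces.reshape(4, 3)
```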


International Conference on Robotics and Automation | 2014

Shared control of autonomous vehicles based on velocity space optimization

Javier Alonso-Mora; Pascal Gohl; Scott Watson; Roland Siegwart; Paul A. Beardsley

This paper presents a method for shared control of a vehicle. The driver commands a preferred velocity, which is transformed into a collision-free local motion that respects the actuator constraints and allows for smooth and safe control. Collision-free local motions are achieved with an extension of velocity obstacles that takes into account dynamic constraints and a grid-based map representation. To limit the freedom of the driver, a global guidance trajectory can be included, which specifies the areas where the vehicle is allowed to drive at each time instant. The low computational complexity of the method makes it well suited for multi-agent settings and high update rates; both a centralized and a distributed algorithm are provided that allow for real-time control of tens of vehicles. Extensive experimental results with real robotic wheelchairs at relatively high speeds in tight scenarios are presented.
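
At its simplest, the shared-control step searches velocity space for an admissible command near the driver's input. The brute-force sketch below samples candidate velocities, rejects those whose short forward simulation crosses occupied cells of the grid map, and returns the remaining candidate closest to the driver's preferred velocity; the paper instead solves this as an optimization with dynamic constraints, so everything here is an illustrative approximation.

```python
import numpy as np

def shared_control_velocity(position, v_pref, occupancy, cell_size,
                            v_max=1.5, horizon=2.0, dt=0.1, samples=24):
    """Pick a collision-free velocity close to the driver's command.

    position  : (2,) current vehicle position in meters
    v_pref    : (2,) driver's preferred velocity
    occupancy : (H, W) boolean grid map, True = occupied
    cell_size : grid resolution in meters per cell
    """
    angles = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    speeds = np.linspace(0.0, v_max, 5)
    candidates = [np.array([s * np.cos(a), s * np.sin(a)])
                  for a in angles for s in speeds]

    def collision_free(v):
        # Forward-simulate a straight-line motion and check grid cells.
        for t in np.arange(dt, horizon + dt, dt):
            cell = np.floor((position + t * v) / cell_size).astype(int)
            if (cell < 0).any() or cell[1] >= occupancy.shape[0] \
                    or cell[0] >= occupancy.shape[1]:
                return False
            if occupancy[cell[1], cell[0]]:    # row = y index, col = x index
                return False
        return True

    feasible = [v for v in candidates if collision_free(v)]
    if not feasible:
        return np.zeros(2)                     # stop if nothing is admissible
    return min(feasible, key=lambda v: np.linalg.norm(v - v_pref))
```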

Collaboration


Dive into Paul A. Beardsley's collaborations.

Top Co-Authors

Javier Alonso-Mora
Massachusetts Institute of Technology

Bernd Bickel
Institute of Science and Technology Austria