Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Philipp Michel is active.

Publications


Featured research published by Philipp Michel.


IEEE-RAS International Conference on Humanoid Robots | 2005

Vision-guided humanoid footstep planning for dynamic environments

Philipp Michel; Joel E. Chestnutt; James J. Kuffner; Takeo Kanade

Despite the stable walking capabilities of modern biped humanoid robots, their ability to autonomously and safely navigate obstacle-filled, unpredictable environments has so far been limited. We present an approach to autonomous humanoid walking that combines vision-based sensing with a footstep planner, allowing the robot to navigate toward a desired goal position while avoiding obstacles. An environment map including the robot, goal, and obstacle locations is built in real-time from vision. The footstep planner then computes an optimal sequence of footstep locations within a time-limited planning horizon. Footstep plans are reused and only partially recomputed as the environment changes during the walking sequence. In our experiments, combining real-time vision with plan reuse has allowed a Honda ASIMO humanoid robot to autonomously traverse dynamic environments containing unpredictably moving obstacles.
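
A minimal sketch of the time-limited footstep search the abstract describes, not the authors' implementation: the action set, unit step cost, and state discretization below are assumptions, and plan reuse is reduced to calling the planner again each cycle while keeping the best partial plan found before the deadline.

```python
import heapq
import math
import time

# Hypothetical discrete action set: (forward, sideways, turn) offsets of the
# next footstep relative to the current stance pose. A real planner would use
# an action set tuned to the robot's kinematics and alternate left/right feet.
ACTIONS = [(0.20, 0.00, 0.0), (0.15, 0.08, 0.0), (0.15, -0.08, 0.0),
           (0.10, 0.00, 0.3), (0.10, 0.00, -0.3)]
GOAL_TOLERANCE = 0.15  # metres

def apply_step(state, action):
    """Advance a stance pose (x, y, heading) by one footstep action."""
    x, y, th = state
    dx, dy, dth = action
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

def heuristic(state, goal):
    return math.hypot(goal[0] - state[0], goal[1] - state[1])

def plan_footsteps(start, goal, in_collision, horizon_s=0.1):
    """A*-style search over footstep placements that stops when the planning
    horizon expires and returns the best partial plan found so far, so the
    caller can keep replanning at a fixed rate while the robot walks."""
    deadline = time.monotonic() + horizon_s
    frontier = [(heuristic(start, goal), 0.0, start, [])]
    seen = set()
    best_h, best_path = float("inf"), []
    while frontier and time.monotonic() < deadline:
        _, g, state, path = heapq.heappop(frontier)
        key = (round(state[0], 2), round(state[1], 2), round(state[2], 1))
        if key in seen:
            continue
        seen.add(key)
        h = heuristic(state, goal)
        if h < best_h:
            best_h, best_path = h, path
        if h < GOAL_TOLERANCE:
            return path
        for action in ACTIONS:
            nxt = apply_step(state, action)
            if not in_collision(nxt):
                g2 = g + 1.0  # unit cost per footstep
                heapq.heappush(frontier, (g2 + heuristic(nxt, goal),
                                          g2, nxt, path + [nxt]))
    return best_path  # horizon hit: best partial plan toward the goal

# e.g. plan_footsteps((0, 0, 0), (2.0, 0.5, 0), lambda s: False)
```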


Intelligent Robots and Systems | 2007

GPU-accelerated real-time 3D tracking for humanoid locomotion and stair climbing

Philipp Michel; Joel E. Chestnutt; Satoshi Kagami; Koichi Nishiwaki; James J. Kuffner; Takeo Kanade

For humanoid robots to fully realize their biped potential in a three-dimensional world and step over, around or onto obstacles such as stairs, appropriate and efficient approaches to execution, planning and perception are required. To this end, we have accelerated a robust model-based three-dimensional tracking system by programmable graphics hardware to operate online at frame-rate during locomotion of a humanoid robot. The tracker recovers the full 6 degree-of-freedom pose of viewable objects relative to the robot. Leveraging the computational resources of the GPU for perception has enabled us to increase our tracker's robustness to the significant camera displacement and camera shake typically encountered during humanoid navigation. We have combined our approach with a footstep planner and a controller capable of adaptively adjusting the height of swing leg trajectories. The resulting integrated perception-planning-action system has allowed an HRP-2 humanoid robot to successfully and rapidly localize, approach and climb stairs, as well as to avoid obstacles during walking.
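
The paper's tracker is model-based and runs on programmable graphics hardware; as a rough, generic illustration of recovering a 6 degree-of-freedom pose by scoring many pose hypotheses against image measurements (plain NumPy over model points here, standing in for the GPU-rendered comparison), one might write:

```python
import numpy as np

def euler_to_R(rx, ry, rz):
    """Rotation matrix from XYZ Euler angles (toy parameterization)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(model_pts, pose, focal=500.0):
    """Pinhole projection of 3D model points under a 6-DoF pose
    (tx, ty, tz, rx, ry, rz)."""
    cam = model_pts @ euler_to_R(*pose[3:]).T + pose[:3]
    return focal * cam[:, :2] / cam[:, 2:3]

def track_step(model_pts, observed_2d, prev_pose, n_hyp=500, noise=0.02):
    """One update: sample pose hypotheses around the previous estimate
    (a NumPy array of 6 values) and keep the one with the lowest
    reprojection error. The paper evaluates such hypotheses in parallel
    on the GPU; this loop is a serial CPU analogue."""
    hyps = prev_pose + np.random.normal(0.0, noise, (n_hyp, 6))
    hyps[0] = prev_pose  # always retain the previous estimate as a candidate
    errors = [np.mean(np.linalg.norm(project(model_pts, h) - observed_2d, axis=1))
              for h in hyps]
    return hyps[int(np.argmin(errors))]
```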


International Conference on Robotics and Automation | 2006

Online environment reconstruction for biped navigation

Philipp Michel; Joel E. Chestnutt; Satoshi Kagami; Koichi Nishiwaki; James J. Kuffner; Takeo Kanade

As navigation autonomy becomes an increasingly important research topic for biped humanoid robots, efficient approaches to perception and mapping that are suited to the unique characteristics of humanoids and their typical operating environments are required. This paper presents a system for online environment reconstruction that utilizes both external sensors for global localization, and on-body sensors for detailed local mapping. An external optical motion capture system is used to accurately localize on-board sensors that integrate successive 2D views of a calibrated camera and range measurements from a SwissRanger SR-2 time-of-flight sensor to construct global environment maps in real-time. Environment obstacle geometry is encoded in 2D occupancy grids and 2.5D height maps for navigation planning. We present an on-body implementation for the HRP-2 humanoid robot that, combined with a footstep planner, enables the robot to autonomously traverse dynamic environments containing unpredictably moving obstacles.
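
A minimal sketch of the 2.5D height-map half of that representation, assuming points (a NumPy array) have already been transformed into world coordinates by the external localization; grid size, resolution, and the obstacle-height threshold are illustrative values, not the paper's:

```python
import numpy as np

class HeightMap:
    """Toy 2.5D height map: one maximum-height value per ground cell."""

    def __init__(self, size_m=4.0, resolution=0.02):
        n = int(size_m / resolution)
        self.resolution = resolution
        self.heights = np.full((n, n), -np.inf)
        self.occupied = np.zeros((n, n), dtype=bool)

    def integrate(self, world_points, obstacle_height=0.03):
        """Fold a batch of (x, y, z) world points into the grid: keep the
        maximum z seen per cell and flag cells taller than what the biped
        can step over as obstacles for the footstep planner."""
        ij = np.floor(world_points[:, :2] / self.resolution).astype(int)
        inside = np.all((ij >= 0) & (ij < self.heights.shape[0]), axis=1)
        for (i, j), z in zip(ij[inside], world_points[inside, 2]):
            self.heights[i, j] = max(self.heights[i, j], z)
            self.occupied[i, j] = self.heights[i, j] > obstacle_height
```

A real online system would additionally decay or clear stale cells so that moving obstacles free up space they previously occupied.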


Intelligent Robots and Systems | 2004

Motion-based robotic self-recognition

Philipp Michel; Kevin Gold; Brian Scassellati

We present a method for allowing a humanoid robot to recognize its own motion in its visual field, thus enabling it to distinguish itself from other agents in the vicinity. Our approach consists of learning a characteristic time window between the initiation of motor movement and the perception of arm motions. The method has been implemented and evaluated on an infant humanoid platform. Our results demonstrate the effectiveness of using the delayed temporal contingency in the action-perception loop as a basis for simple self-other discrimination. We conclude by suggesting potential applications in social robotics and in generating forward models of motion.
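
A toy sketch of that temporal-contingency test, with the paper's perception of arm motion reduced to onset timestamps; the window statistics and threshold below are assumptions:

```python
import numpy as np

class SelfRecognizer:
    """Learn the characteristic motor-to-vision delay, then label motion
    whose latency falls inside the learned window as self-generated."""

    def __init__(self):
        self.delays = []  # seconds between motor command and perceived motion

    def observe_training_pair(self, command_time, motion_onset_time):
        # Training phase: the robot moves in an otherwise static scene.
        self.delays.append(motion_onset_time - command_time)

    def is_self(self, command_time, motion_onset_time, k=2.0):
        # Needs several training trials before the window is meaningful.
        d = np.asarray(self.delays)
        latency = motion_onset_time - command_time
        return abs(latency - d.mean()) <= k * d.std()
```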


Robot and Human Interactive Communication | 2006

Roillo: Creating a Social Robot for Playrooms

Marek P. Michalowski; Selma Sabanovic; Philipp Michel

In this paper, we introduce Roillo, a social robotic platform for investigating, in the context of children's playrooms, questions about how to design compelling nonverbal interactive behaviors for social robots. Specifically, we are interested in the importance of rhythm to natural interactions and its role in the expression of affect, attention, and intent. Our design process has consisted of rendering, animation, surveys, mechanical prototyping, and puppeteered interaction with children.


Intelligent Robots and Systems | 2007

Locomotion among dynamic obstacles for the Honda ASIMO

Joel E. Chestnutt; Philipp Michel; James J. Kuffner; Takeo Kanade

We have equipped a Honda ASIMO humanoid with the ability to navigate autonomously in obstacle-filled environments. In addition to finding its way through known, fixed obstacle configurations, the planning system can reason about the future state of the world to locomote through challenging environments when the obstacle motions can be inferred from observation. This video presents work using a vision system to predict the velocities of objects in the scene, allowing ASIMO to safely navigate autonomously through a dynamic environment. Neither obstacle positions nor velocities are known at the start of the trial, but are estimated online as the robot walks. The planner constantly adjusts the footstep path with the latest estimates of ASIMO's position and the obstacle trajectories, allowing the robot to successfully circumnavigate the moving obstacles.
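
A minimal sketch of the prediction step, assuming constant-velocity extrapolation and a fixed obstacle radius (both illustrative choices, not details from the video): footsteps are checked against where an obstacle is expected to be, not where it is now.

```python
def estimate_velocity(prev_pos, prev_t, pos, t):
    """Finite-difference velocity estimate from two tracked obstacle positions."""
    dt = t - prev_t
    return ((pos[0] - prev_pos[0]) / dt, (pos[1] - prev_pos[1]) / dt)

def predicted_collision(foot_xy, step_time, obs_pos, obs_vel, radius=0.25):
    """Check a planned footstep against the obstacle's extrapolated position
    at the time the step would be taken."""
    ox = obs_pos[0] + obs_vel[0] * step_time
    oy = obs_pos[1] + obs_vel[1] * step_time
    return (foot_xy[0] - ox) ** 2 + (foot_xy[1] - oy) ** 2 < radius ** 2
```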


ISRR | 2007

Humanoid HRP2-DHRC for Autonomous and Interactive Behavior

Satoshi Kagami; Koichi Nishiwaki; James J. Kuffner; Simon Thompson; Joel E. Chestnutt; Mike Stilman; Philipp Michel

Recently, research on humanoid-type robots has become increasingly active, and a broad array of fundamental issues is under investigation. However, in order to achieve a humanoid robot that can operate in human environments, not only the fundamental components themselves, but also the successful integration of these components will be required. At present, almost all humanoid robots developed so far have been designed for bipedal locomotion experiments. In order to satisfy the functional demands of locomotion as well as high-level behaviors, humanoid robots require good mechanical design, hardware, and software that can support the integration of tactile sensing, visual perception, and motor control. Autonomous behaviors are currently still very primitive for humanoid-type robots. It is difficult to conduct research on high-level autonomy and intelligence in humanoids due to the development and maintenance costs of the hardware. We believe low-level autonomous functions will be required in order to conduct research on higher-level autonomous behaviors for humanoids.


IEEE-RAS International Conference on Humanoid Robots | 2008

Humanoid navigation planning using future perceptive capability

Philipp Michel; Joel E. Chestnutt; Satoshi Kagami; Koichi Nishiwaki; James J. Kuffner; Takeo Kanade

We present an approach to navigation planning for humanoid robots that aims to ensure reliable execution by augmenting the planning process to reason about the robot's ability to successfully perceive its environment during operation. By efficiently simulating the robot's perception system during search, our planner generates a metric, the so-called perceptive capability, that quantifies the 'sensability' of the environment in each state given the task to be accomplished. We have applied our method to the problem of planning robust autonomous walking sequences as performed by an HRP-2 humanoid. A fast GPU-accelerated 3D tracker is used for perception, with a footstep planner incorporating reasoning about the robot's perceptive capability. When combined with a controller capable of adaptively adjusting the height of swing leg trajectories, HRP-2 is able to navigate around obstacles and climb stairs in dynamically changing environments. Reasoning about the future perceptive capability ensures that sensing remains operational throughout the walking sequence and yields higher task success rates than perception-unaware planning.
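
One way to picture the idea, as a hedged sketch rather than the paper's metric: score each candidate stance by a simulated field-of-view test against task-relevant landmarks, and fold that score into the step cost so the search prefers plans that keep perception operational. The weighting and field-of-view value are assumptions.

```python
import math

def perceptive_capability(state, landmarks, fov=math.radians(60)):
    """Fraction of task-relevant landmarks inside the camera's field of view
    from a candidate stance (x, y, heading)."""
    x, y, th = state
    visible = 0
    for lx, ly in landmarks:
        bearing = math.atan2(ly - y, lx - x) - th
        bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
        if abs(bearing) < fov / 2:
            visible += 1
    return visible / len(landmarks)

def step_cost(state, landmarks, base_cost=1.0, w_perception=2.0):
    """Penalize stances from which the environment becomes hard to sense."""
    return base_cost + w_perception * (1.0 - perceptive_capability(state, landmarks))
```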


Proceedings of the JSME Annual Conference on Robotics and Mechatronics (Robomec) | 2008

2P1-G09 GPU-accelerated Real-Time 3D Tracking for Humanoid Autonomy

Philipp Michel; Joel E. Chestnutt; Satoshi Kagami; Koichi Nishiwaki; James J. Kuffner; Takeo Kanade

We have accelerated a robust model-based 3D tracking system by programmable graphics hardware to run online at frame-rate during operation of a humanoid robot and to efficiently auto-initialize. The tracker recovers the full 6 degree-of-freedom pose of viewable objects relative to the robot. Leveraging the computational resources of the GPU for perception has enabled us to increase our tracker’s robustness to the significant camera displacement and camera shake typically encountered during humanoid navigation. We have combined our approach with a footstep planner and a controller capable of adaptively adjusting the height of swing leg trajectories. The resulting integrated perception-planning-action system has allowed an HRP-2 humanoid robot to successfully and rapidly localize, approach and climb stairs, as well as to avoid obstacles during walking.


International Conference on Multimodal Interfaces | 2003

Real time facial expression recognition in video using support vector machines

Philipp Michel; Rana El Kaliouby
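
No abstract accompanies this entry here; purely as a generic sketch of the technique named in the title, an SVM classifying expressions from per-frame displacements of tracked facial feature points. The feature dimensions, labels, placeholder data, and the scikit-learn dependency are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 44))    # placeholder: 22 tracked points -> (dx, dy) each
y_train = rng.integers(0, 6, size=200)  # placeholder labels for six basic expressions

clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

frame_displacements = rng.normal(size=(1, 44))  # current frame vs. neutral frame
print(clf.predict(frame_displacements))         # predicted expression label per frame
```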

Collaboration


Dive into Philipp Michel's collaborations.

Top Co-Authors

Avatar

Joel E. Chestnutt

Carnegie Mellon University

View shared research outputs
Top Co-Authors

Avatar

Satoshi Kagami

Tokyo University of Science

View shared research outputs
Top Co-Authors

Avatar

Takeo Kanade

Carnegie Mellon University

View shared research outputs
Top Co-Authors

Avatar

Koichi Nishiwaki

Carnegie Mellon University

View shared research outputs
Top Co-Authors

Avatar

Mike Stilman

Georgia Institute of Technology

View shared research outputs