
Publication


Featured research published by Jochen Heinzmann.


The International Journal of Robotics Research | 2003

Quantitative Safety Guarantees for Physical Human-Robot Interaction

Jochen Heinzmann; Alexander Zelinsky

If robots are to be introduced into the human world as assistants that aid a person in the completion of a manual task, two key problems of today's robots must be solved: the human-robot interface must be intuitive to use, and the safety of the user with respect to injuries inflicted by collisions with the robot must be guaranteed. In this paper we describe the formulation and implementation of a control strategy for robot manipulators which provides quantitative safety guarantees for the user of assistant-type robots. We propose a control scheme for robot manipulators that restricts the torque commands of a position control algorithm to values that comply with preset safety restrictions. These safety restrictions limit the potential impact force of the robot in the case of a collision with a person. Such accidental collisions may occur with any part of the robot, and therefore the impact force not only of the robot's hand but of all surfaces is controlled by the scheme. The integration of a visual control interface and the safely controlled robot allows safe and intuitive interaction between a person and the robot. As an example application, the system is programmed to retrieve eye-gaze-selected objects from a table and to hand them over to the user on demand.
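The torque-restriction idea in this abstract can be illustrated with a minimal sketch. The names, limits, and per-joint clamping here are hypothetical simplifications; the paper derives impact-force bounds over all robot surfaces rather than simple per-joint torque clamps:

```python
import numpy as np

def limit_torques(tau_cmd, tau_safe_max):
    """Clamp each joint torque command to a preset safety bound.

    tau_cmd      : torque vector produced by the position controller
    tau_safe_max : per-joint torque magnitudes chosen so that worst-case
                   impact forces stay below a safety limit (hypothetical)
    """
    return np.clip(tau_cmd, -tau_safe_max, tau_safe_max)

# Example: a 3-joint command whose second element exceeds its bound
tau = limit_torques(np.array([12.0, -30.0, 5.0]), np.array([20.0, 25.0, 10.0]))
print(tau)  # [ 12. -25.   5.]
```

The position controller runs unchanged; only its output is filtered, so the robot tracks its trajectory whenever that is compatible with the safety restrictions and degrades gracefully otherwise.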


ieee international conference on automatic face and gesture recognition | 1998

3-D facial pose and gaze point estimation using a robust real-time tracking paradigm

Jochen Heinzmann; Alexander Zelinsky

Facial pose and gaze point are fundamental to any visually directed human-machine interface. In this paper we propose a system capable of tracking a face and estimating the 3-D pose and the gaze point, all in real time, from a video stream of the head. This is done by using a 3-D model together with multiple-triplet triangulation of feature positions, assuming an affine projection. Using feature-based tracking, the calculation of a 3-D eye-gaze direction vector is possible even under head rotation and with a monocular camera. The system is also able to automatically initialise the feature tracking and to recover from total tracking failures, which can occur when a person becomes occluded or temporarily leaves the image.
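The affine-projection assumption makes the model-to-image fit a linear problem. The sketch below shows that core step only, recovering an affine camera from 3-D model points and tracked 2-D feature positions by least squares; it is a simplified illustration, not the paper's triplet-triangulation pipeline:

```python
import numpy as np

def fit_affine_projection(X, x):
    """Fit an affine camera x = P @ [X; 1] from 3-D model points X (n, 3)
    and tracked 2-D feature positions x (n, 2) by linear least squares."""
    n = X.shape[0]
    Xh = np.hstack([X, np.ones((n, 1))])        # homogeneous model points
    P, *_ = np.linalg.lstsq(Xh, x, rcond=None)  # solves Xh @ P ~= x
    return P.T                                   # (2, 4) projection matrix

# Synthetic check: project points with a known affine camera, then recover it
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
P_true = rng.normal(size=(2, 4))
x = (P_true @ np.hstack([X, np.ones((6, 1))]).T).T
P_est = fit_affine_projection(X, x)
print(np.allclose(P_est, P_true))  # True
```

From the fitted projection, head pose follows by decomposing the recovered matrix; combining it with tracked eye-feature positions would then give the gaze direction vector.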


Journal of Intelligent and Robotic Systems | 1999

A Safe-Control Paradigm for Human–Robot Interaction

Jochen Heinzmann; Alexander Zelinsky

This paper introduces a new approach to controlling a robot manipulator in a way that is safe for humans in the robot's workspace. Conceptually, the robot is viewed as a tool with limited autonomy. The limited perception capabilities of automatic systems prohibit the construction of failsafe robots with the capabilities of people. Instead, the goal of our control paradigm is to make interaction with a robot manipulator safe by making the robot's actions predictable and understandable to the human operator. At the same time, the forces the robot applies with any part of its body to its environment have to be controllable and limited. Experimental results are presented of a human-friendly robot controller that is under development for a Barrett Whole Arm Manipulator robot.


Archive | 2000

Building Human-Friendly Robot Systems

Jochen Heinzmann; Alexander Zelinsky

To develop human-friendly robots we require two key components: smart interfaces and safe mechanisms. Smart interfaces facilitate natural and easy human-robot interaction. Facial gestures can be a natural way to control a robot. In this paper, we report on a vision-based interface that tracks a user's facial features and gaze point in real time. Human-friendly robots must also have high-integrity safety systems that ensure that people are never harmed. To guarantee human safety we require manipulator mechanisms in which all actuators are force controlled in a manner that prevents dangerous impacts with people and the environment. In this paper we present a control scheme for a whole arm manipulator (WAM) which allows for safe human-robot interaction.


intelligent robots and systems | 1999

The safe control of human-friendly robots

Jochen Heinzmann; Alexander Zelinsky

This paper introduces an approach to the control of robot manipulators in a way that is safe for humans in the robot's workspace. Conceptually, the robot is viewed as a tool with limited autonomy. The limited perception capabilities of automatic systems prohibit the construction of failsafe robots with the capabilities of people. Instead, the goal of our control scheme is to make interaction with a robot manipulator safe by making the robot's actions predictable and understandable to the human operator. At the same time, the forces the robot applies with any part of its body to its environment have to be controllable and limited. Experimental results are presented of a human-friendly robot controller that is under development for a Barrett Whole Arm Manipulator robot.


international conference on robotics and automation | 1998

Range and pose estimation for visual servoing of a mobile robot

D. Jung; Jochen Heinzmann; A. Zelinsky

This paper describes the implementation of a behaviour for real-time visual servoing on a mobile robot. The behaviour is a component of a multi-robot cleaning system developed in the context of our investigation into architectures for cooperative systems. An important feature in support of cooperation is the awareness of one robot by another, which this behaviour realises. Robust feature tracking, aided by a hardware vision system, is described. This forms the basis for range and pose estimation using a 3D projective model.


Advanced Robotics | 1996

A novel visual interface for human-robot communication

Alexander Zelinsky; Jochen Heinzmann

The purpose of a robot is to execute tasks for people. People should be able to communicate with robots in a natural way. People naturally express themselves through body language using facial gestures...


international conference on intelligent transportation systems | 2008

Spatio-Temporal RANSAC for Robust Estimation of Ground Plane in Video Range Images for Automotive Applications

Faisal Mufti; Robert E. Mahony; Jochen Heinzmann

This paper considers the problem of ground plane estimation in range image data obtained from a Time-of-Flight camera. We extend 3D spatial RANSAC for ground plane estimation to 4D spatio-temporal RANSAC by incorporating a time axis. Ground plane models are derived from spatio-temporal random data points, thereby robustifying the algorithm against short-term temporal effects such as passing cars, pedestrians, etc. The computationally fast and robust estimation of the ground plane leads to reliable identification of obstacles and pedestrians using statistically derived spatial thresholds. Experimental results with real video data from a range sensor mounted on a vehicle moving in a car park are presented.
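The underlying RANSAC step can be sketched as follows. This is a plain 3-D plane-fitting sketch with hypothetical parameters, not the paper's 4D formulation; the spatio-temporal extension would pool `pts` from several consecutive range frames so that a plane supported across time outscores planes fit to transient objects in a single frame:

```python
import numpy as np

def ransac_ground_plane(pts, n_iters=200, thresh=0.05, rng=None):
    """Estimate a plane n . p + d = 0 by RANSAC over 3-D points (n, 3).

    Repeatedly fits a plane to 3 random points and keeps the model
    with the most inliers within distance `thresh` of the plane.
    """
    rng = rng or np.random.default_rng(0)
    best_count, best_model = 0, None
    for _ in range(n_iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (nearly collinear) sample
        n /= norm
        d = -n @ sample[0]
        count = int((np.abs(pts @ n + d) < thresh).sum())
        if count > best_count:
            best_count, best_model = count, (n, d)
    return best_model, best_count

# Synthetic scene: noisy ground points near z = 0 plus elevated clutter
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(-5, 5, (300, 2)), rng.normal(0, 0.01, 300)])
clutter = rng.uniform(-5, 5, (50, 3)) + np.array([0.0, 0.0, 2.0])
(n, d), count = ransac_ground_plane(np.vstack([ground, clutter]))
```

On this synthetic scene the recovered normal is near-vertical and the inlier count is dominated by the ground points, which is the behaviour the statistically derived thresholds in the paper rely on for obstacle segmentation.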


international symposium on experimental robotics | 1999

Towards Human Friendly Robots: Vision-based Interfaces and Safe Mechanisms

Alexander Zelinsky; Yoshio Matsumoto; Jochen Heinzmann; Rhys Newman

To develop human friendly robots we required two key components; smart interfaces and safe mechanisms. Smart interfaces facilitate natural and easy interfaces for human-robot interaction. Facial gestures can be a natural way to control a robot. In this paper, we report on a vision-based interface that in real-time tracks a user’s facial features and gaze point. Human friendly robots must also have high integrity safety systems that ensure that people are never harmed. To guarantee human safety we require manipulator mechanisms in which all actuators are force controlled in a manner that prevents dangerous impacts with people and the environment. In this paper we present a control scheme for the Barrett-MIT whole arm manipulator (WAM) which allows people to safely interact with the robot.


Archive | 2011

4D Ground Plane Estimation Algorithm for Advanced Driver Assistance Systems

Faisal Mufti; Robert E. Mahony; Jochen Heinzmann

Over the last two decades there has been a significant improvement in automotive design, technology and comfort standards, along with safety regulations and requirements. At the same time, growth in population and a steady increase in the number of road users have resulted in a rise in the number of accidents involving both automotive users and pedestrians. According to the World Health Organization, road traffic accidents, including auto accidents and personal injury collisions, account for the deaths of an estimated 1.2 million people worldwide each year, with 50 million or more suffering injuries (World Health Organization, 2009). These figures are expected to grow by 20% within the next 20 years (Peden et al., 2004). In the European Union alone, the imperative need for Advanced Driver Assistance Systems (ADAS) sensors can be gauged from the fact that every day the total number of people killed on Europe's roads is almost the same as the number of people killed in a single medium-haul plane crash (European Commission, 2001), with third-party road users (pedestrians, cyclists, etc.) comprising the bulk of these fatalities (see Figure 1 for the proportion of road injuries) (Sethi, 2008). This translates into a direct and indirect cost on society, including physical and psychological damage to families and victims, with an economic cost of 160 billion euros annually (European Commission, 2008). These statistics provide a strong motivation to improve the ADAS ability of automobiles for the safety of both passengers and pedestrians. The techniques used to develop vision-based ADAS depend heavily on the imaging device technology that provides continuous updates of the surroundings of the vehicle and aid

Collaboration


Dive into Jochen Heinzmann's collaborations.

Top Co-Authors

Alexander Zelinsky, Australian National University
Faisal Mufti, Center for Advanced Studies in Engineering
Robert E. Mahony, Australian National University
D. Jung, Australian National University
Rhys Newman, Australian National University
Sebastien Rougeaux, Australian National University
Yoshio Matsumoto, Australian National University