
Publication


Featured research published by Hans-Ullrich Doehler.


Proceedings of SPIE | 2009

ALLFlight - A full scale enhanced and synthetic vision sensor suite for helicopter applications

Hans-Ullrich Doehler; Thomas Lueken; R. Lantzsch

In 2008 the German Aerospace Center (DLR) started the project ALLFlight (Assisted Low Level Flight and Landing on Unprepared Landing Sites). This project deals with the integration of a full-scale enhanced vision sensor suite onto DLR's research helicopter EC135. The sensor suite consists of a variety of imaging sensors, including a color TV camera and an uncooled thermal infrared camera. Two ranging sensors are also part of this suite: an optical radar scanner and a millimeter wave radar system. Both radar systems are equipped with specialized software for experimental modes, such as terrain mapping and ground scanning. To process and display the huge incoming flood of data from these sensors, a compact high-performance sensor co-computer system (SCC) has been designed and realized, which can easily be installed into the helicopter's cargo bay. A sophisticated, high-performance, distributed software architecture for data acquisition, recording, processing, and fusion was developed and implemented during the first project year. The paper describes the challenging mechanical integration of such a comprehensive sensor suite onto the EC135 and explains the architectural hardware and software concept and its implementation on the SCC.


Proceedings of SPIE | 2014

Visual-conformal display format for helicopter guidance

Hans-Ullrich Doehler; Sven Schmerwitz; Thomas Lueken

Helicopter guidance in situations where natural vision is reduced is still a challenging task. Besides new sensors that are able to “see” through darkness, fog, and dust, display technology remains one of the key issues of pilot assistance systems. As long as we have pilots within aircraft cockpits, we have to keep them informed about the outside situation. “Situational awareness” of humans is mainly driven by their visual channel. Therefore, display systems that can cross-fade seamlessly from natural vision to artificial computer vision and vice versa are of greatest interest within this context. Helmet-mounted displays (HMD) have this property when they apply a head tracker for measuring the pilot’s head orientation relative to the aircraft reference frame. Together with the aircraft’s position and orientation relative to the world reference frame, the on-board graphics computer can generate images which are perfectly aligned with the outside world. We call image elements which match the outside world “visual-conformal”. Published display formats for helicopter guidance in degraded visual environments mostly apply 2D symbologies, which fall far short of what is possible. We propose a perspective 3D symbology for a head-tracked HMD which shows as many visual-conformal elements as possible. We implemented and tested our proposal in our fixed-base cockpit simulator as well as in our flying helicopter simulator (FHS). Recently conducted simulation trials with experienced helicopter pilots provide first evaluation results of our proposal.
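The alignment pipeline sketched in this abstract (aircraft pose plus head-tracker orientation, followed by projection) can be illustrated roughly as a chain of rigid transforms and a pinhole projection. This is a minimal sketch, not the authors' implementation: the yaw-only rotations, axis conventions, and all names are simplifying assumptions.

```python
import numpy as np

def rot_z(yaw):
    """Rotation about the vertical axis (heading only); a minimal
    stand-in for a full yaw/pitch/roll attitude matrix."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def world_to_hmd_pixel(p_world, ac_pos, ac_yaw, head_yaw, f_px, cx, cy):
    """Project a world point into HMD image coordinates:
    world -> aircraft frame (aircraft pose) -> head frame (head tracker)
    -> pinhole projection.  Axes: x forward, y right, z down."""
    p_ac = rot_z(ac_yaw).T @ (p_world - ac_pos)   # world -> aircraft
    p_head = rot_z(head_yaw).T @ p_ac             # aircraft -> head
    x_fwd, y_right, z_down = p_head
    if x_fwd <= 0:                                # point behind the viewer
        return None
    u = cx + f_px * y_right / x_fwd               # pinhole projection
    v = cy + f_px * z_down / x_fwd
    return u, v
```

A symbol rendered at the pixel this returns stays locked onto the corresponding world feature as the pilot's head turns, which is what makes it visual-conformal.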


International Conference on Pattern Recognition | 2000

Weather independent flight guidance: analysis of MMW radar images for approach and landing

Bernd Korn; Hans-Ullrich Doehler; Peter Hecker

We present a system which provides navigation information for approach and landing based on the analysis of millimeter wave (MMW) radar data. The advantage of MMW radar sensors is that data acquisition is independent of the actual weather and daylight situation. The core of the presented system is a fuzzy rule-based inference machine which controls the data analysis based on the uncertainty in the current knowledge in combination with a priori knowledge. Compared with standard TV or IR images, the quality of MMW images is rather poor and the data are highly corrupted by noise and clutter. Therefore, one main task of the inference machine is to handle uncertainties as well as ambiguities and inconsistencies in order to draw the right conclusions. The performance of our approach is demonstrated with real data acquired during extensive flight tests at several airports in Northern Germany.
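The abstract describes the inference machine only at a high level; a toy fuzzy rule of the general kind it alludes to might look as follows. The membership functions, thresholds, and the rule itself are invented purely for illustration and are not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def runway_confidence(edge_strength, noise_level):
    """Toy fuzzy rule:
       IF edges strong AND noise low THEN runway-hypothesis confidence high.
    Fuzzy AND is taken as min; returns the rule's firing strength in [0, 1].
    All membership parameters below are invented."""
    strong = tri(edge_strength, 0.4, 1.0, 1.6)
    low_noise = tri(noise_level, -0.6, 0.0, 0.6)
    return min(strong, low_noise)
```

In a full system many such rules would be aggregated (e.g. by max) so that ambiguous or inconsistent evidence lowers, rather than flips, a hypothesis.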


Proceedings of SPIE | 2016

A concept for a virtual flight deck shown on an HMD

Johannes M. Ernst; Hans-Ullrich Doehler; Sven Schmerwitz

A combination of see-through head-worn or helmet-mounted displays (HMDs) and imaging sensors is frequently used to overcome the limitations of the human visual system in degraded visual environments (DVE). A visual-conformal symbology displayed on the HMD allows the pilots to see objects such as the landing site or obstacles being invisible otherwise. These HMDs are worn by pilots sitting in a conventional cockpit, which provides a direct view of the external scene through the cockpit windows and a user interface with head-down displays and buttons. In a previous publication, we presented the advantages of replacing the conventional head-down display hardware by virtual instruments. These virtual aircraft-fixed cockpit instruments were displayed on the Elbit JEDEYE system, a binocular, see-through HMD. The idea of our current work is to not only virtualize the display hardware of the flight deck, but also to replace the direct view of the out-the-window scene by a virtual view of the surroundings. This imagery is derived from various sensors and rendered on an HMD, however without see-through capability. This approach promises many advantages over conventional cockpit designs. Besides potential weight savings, this future flight deck can provide a less restricted outside view as the pilots are able to virtually see through the airframe. The paper presents a concept for the realization of such a virtual flight deck and states the expected benefits as well as the challenges to be met.


Enhanced and Synthetic Vision 2000 | 2000

MMW radar based navigation: Solutions of the vertical position problem

Bernd Korn; Hans-Ullrich Doehler; Peter Hecker

The acquisition of navigation data is an important upgrade of enhanced vision (EV) systems. For example, the position of an aircraft relative to the runway during landing approaches has to be derived directly from the data of the EV sensors if no ILS or GPS navigation information is available. Due to its weather independence, MMW radar plays an important role among possible EV sensors. Generally, information about the altitude of the aircraft relative to a target ahead (the runway) is not available within radar data. A common approach to overcome this so-called vertical position problem is the use of the flat earth assumption, i.e., the altitude above the runway is assumed to be the same as the current altitude of the aircraft measured by the radar altimeter. Another approach known from the literature is to combine radar images from different positions, similar to stereo and structure-from-motion approaches in computer vision. In this paper we present a detailed investigation of the latter approach. We examine the correspondence problem with regard to the special geometry of radar sensors as well as the principal methodology for estimating 3D information from different range and angle measurements. The main part of the contribution deals with the question of accuracy: What accuracy can be obtained? What are the influences of factors like vertical beam width, range and angular resolution of the sensor, the relative transformation between different sensor locations, etc.? Based on this investigation, we introduce a new approach for vertical positioning. As a special benefit, this method delivers a measure of validity which allows judging the estimate of the relative vertical position from sensor to target. The performance of our approach is demonstrated with both simulated data and real data acquired during flight tests.
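The geometry behind the stereo-like approach can be sketched in 2D: two slant-range measurements of the same ground target, taken from two positions along the flight path, define two circles whose intersection fixes the target's vertical offset. The function below is an illustrative simplification (level flight, known correspondence, 2D), not the paper's estimator.

```python
import math

def target_height_2d(x1, x2, z, r1, r2):
    """Recover a target's 2D position from two slant ranges r1, r2
    measured at (x1, z) and (x2, z) during level flight.
    Subtracting the two circle equations
        (xt - xi)^2 + (zt - z)^2 = ri^2
    eliminates the quadratic zt terms, giving xt linearly; zt then
    follows from either circle (solution below the aircraft)."""
    xt = (r1**2 - r2**2 + x2**2 - x1**2) / (2.0 * (x2 - x1))
    zt = z - math.sqrt(r1**2 - (xt - x1)**2)
    return xt, zt
```

This also hints at the accuracy question the paper studies: as the baseline x2 - x1 shrinks, the division amplifies range-measurement noise.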


Enhanced and Synthetic Vision 1999 | 1999

MMW radar data processing for enhanced vision

Bernd Korn; Hans-Ullrich Doehler; Peter Hecker

Comprehensive situation awareness is very important for aircrews handling complex situations like landing approaches or taxiing, especially under adverse weather conditions. Thus, DLR's Institute of Flight Guidance is developing an enhanced vision system that uses different forward-looking imaging sensors to gain the information needed for executing given tasks. Furthermore, terrain models, if available, can be used to control as well as to support the sensor data processing. Up to now, the most promising sensor, due to its lowest weather dependency compared to other imaging sensors, seems to be a 35 GHz MMW radar from DASA, Ulm, which provides range data with a frame rate of about 16 Hz. In previous contributions, first experimental results of our radar data processing were presented. In this paper we deal with radar data processing in more detail, focusing on the automatic extraction of features relevant for landing approaches and taxiing maneuvers. In the first part of this contribution we describe a calibration of the MMW radar, which is necessary to determine the exact relationship between raw sensor data (pixels) and world coordinates. Furthermore, the calibration gives us an idea of how accurately features can be located in the world. The second part of this paper covers our approach to automatically extracting features relevant for landing and taxiing. Improvements in spatial resolution as well as noise reduction are achieved with a multi-frame approach. The correspondence of features in different frames is found with the aid of navigation sensors like INS or GPS, but can also be established by tracking methods. To demonstrate the performance of our approach we applied the extraction method to simulated data as well as to real data. The real data were acquired using a test van and a test aircraft, both equipped with a prototype of the imaging MMW radar from DASA, Ulm.
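A calibration of the kind described maps raw radar pixels (range bin, azimuth beam index) to world coordinates. A minimal sketch under a flat-terrain assumption might look like this, where the bin-zero offsets and per-bin increments stand in for the quantities such a calibration would determine; all names and the flat-terrain simplification are assumptions.

```python
import math

def radar_pixel_to_world(range_bin, beam_idx, r0, dr, az0, daz,
                         ac_north, ac_east, heading):
    """Map a raw radar pixel to north/east world coordinates.
    r0, dr  : calibrated range of bin 0 and range increment per bin [m]
    az0, daz: calibrated azimuth of beam 0 and increment per beam [rad]
    heading : aircraft heading from INS/GPS [rad], 0 = north."""
    r = r0 + range_bin * dr        # slant range of this range bin
    az = az0 + beam_idx * daz      # azimuth relative to boresight
    bearing = heading + az         # absolute bearing of the return
    north = ac_north + r * math.cos(bearing)
    east = ac_east + r * math.sin(bearing)
    return north, east
```

With this mapping, features extracted in consecutive frames can be compared in one world frame, which is what enables the multi-frame correspondence via INS/GPS mentioned above.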


Enhanced and Synthetic Vision 1999 | 1999

Enhanced vision meets pilot assistance

Peter Hecker; Hans-Ullrich Doehler; Reiner Suikat

As presented in previous contributions within SPIE's Enhanced and Synthetic Vision conferences, DLR's Institute of Flight Guidance is involved in the design, development, and testing of enhanced vision systems for flight guidance applications. The combination of forward-looking imaging sensors (such as DaimlerChrysler's HiVision millimeter wave radar), terrain data stored in on-board databases, and information transmitted from the ground or from other aircraft via data link is used to give the aircrew improved situational awareness. This helps pilots handle critical tasks, such as landing approaches and taxiing, especially under adverse weather conditions. The research and development of this system was mostly funded by a national research program from mid-1996 to mid-1999. On the one hand, this paper gives a general overview of the project and the lessons learned. Results of recently carried out flight tests are shown, as well as brief looks into evaluation tests on board an Airbus A340 full-flight simulator performed in mid-1998 at the Flight Simulation Center Berlin. On the other hand, an outlook is presented which shows enhanced vision systems as a major player in the theater of pilot assistance systems under development at DLR's Institute of Flight Guidance in close cooperation with the University of the Federal Armed Forces in Munich, Germany.


Proceedings of SPIE, the International Society for Optical Engineering | 2008

Phong-like lighting for MMW radar simulation

Niklas Peinecke; Hans-Ullrich Doehler; Bernd Korn

Radar simulation involves the computation of a radar response based on the terrain's normalized radar cross section (RCS). In the past, different models have been proposed for the normalized RCS. While accurate in most cases, they lack intuitive handling. We present a novel approach for computing the mean normalized radar cross section for use in millimeter wave radar simulations based on Phong lighting. This allows us to model radar power return in an intuitive way using categories of diffuse and specular reflections. The model is computationally more efficient than previous approaches while using only a few parameters. Furthermore, we give example setups for different types of terrain. We show that our technique can accurately model data output from other approaches as well as real-world data.
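One plausible reading of the Phong analogy, for a monostatic geometry, combines a Lambert-like diffuse lobe with a specular lobe concentrated near normal incidence. The function below is an illustrative sketch only; the parameter names, lobe shapes, and the terrain values in the comments are assumptions, not the paper's model.

```python
import math

def phong_like_rcs(theta, k_d, k_s, n):
    """Mean normalized RCS (sigma0) as a Phong-style mix:
    theta : incidence angle between surface normal and radar line of
            sight (monostatic: transmitter and receiver coincide)
    k_d   : diffuse coefficient (rough terrain, e.g. grass, scatters
            broadly -> larger k_d)
    k_s, n: specular coefficient and exponent (smooth surfaces, e.g.
            asphalt, return strongly only near normal incidence ->
            larger k_s and n); values are per terrain category."""
    c = max(0.0, math.cos(theta))
    return k_d * c + k_s * c ** n   # diffuse lobe + specular lobe
```

The intuition matches the abstract's pitch: instead of fitting an opaque empirical curve, one tunes two familiar lighting-style knobs per terrain category.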


International Conference on Image Analysis and Recognition | 2010

Image-based drift and height estimation for helicopter landings in brownout

Hans-Ullrich Doehler; Niklas Peinecke

After years of experience in enhanced and synthetic vision research projects in the fixed-wing domain, DLR's Institute of Flight Guidance is now addressing helicopter applications as well. The project ALLFlight is one example. The main objective of this project is to demonstrate and evaluate the characteristics of different sensors for helicopter operations within degraded visual environments, such as brownout or whiteout. Radar, lidar, IR, and TV cameras are part of this sensor suite. Although the project aims for a full solution of the brownout problem, simple small solutions are investigated as well, which can also be installed in low-cost helicopters. The following paper deals with such an approach. We assume a pair of off-the-shelf vertically looking cameras and an inertial attitude system, which feed their data into a low-budget processing system. The outcome consists of the helicopter's lateral drift and its altitude above ground. These data can easily be shown to the pilot on an inexpensive display. The paper describes the proposed setup and the methods for drift and height measurement.
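Under a pinhole-camera model, the two measurements described reduce to simple relations: height above ground from stereo disparity between the camera pair, and drift speed from ground-texture optical flow in successive frames of one camera, scaled by that height. The helpers below are an illustrative sketch under these assumptions (parallel cameras, level attitude), not the authors' processing chain.

```python
def height_from_disparity(f_px, baseline_m, disparity_px):
    """Height above ground for a downward-looking stereo pair:
    h = f * B / d  (focal length f in pixels, baseline B in meters,
    disparity d in pixels; cameras assumed parallel)."""
    return f_px * baseline_m / disparity_px

def drift_from_flow(height_m, flow_px, dt_s, f_px):
    """Lateral drift speed from ground-texture optical flow between two
    frames of one downward camera, dt_s seconds apart:
    v = h * (flow / dt) / f  (same pinhole scaling, level attitude)."""
    return height_m * (flow_px / dt_s) / f_px
```

In practice the inertial attitude system mentioned above would be used to remove the flow component induced by rotation before applying the second formula.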


Optical Engineering | 2017

Conformal Displays: Human Factors Analysis of Innovative Landing Aids

Sven Schmerwitz; Thomas Lueken; Hans-Ullrich Doehler; Niklas Peinecke; Johannes M. Ernst; David L. da Silva Rosa

Abstract. In the past couple of years, research on display content for helicopter operations has headed in a new direction. The goals already reached could evolve into a paradigm change for information visualization. Technology advancements allow implementing three-dimensional and conformal content on a helmet-mounted see-through device. This superimposed imagery inherits the same optical flow as the environment and is supposed to ease switching between display information and environmental cues. The concept is neither pathbreaking nor new, but it has not yet been successfully established in aviation. Nevertheless, there are certainly advantages to expect, at least from the perspective of a human-centered system design. In the following pages, the next-generation displays are presented and discussed with a focus on human factors. Beginning with a recall of some human-factors-related research facts, an experiment comparing the former two-dimensional research displays is presented. Then, before introducing the DLR conformal symbol set and the three experiments about an innovative drift indication, related research activities toward conformal symbol sets are addressed.

Collaboration


Dive into Hans-Ullrich Doehler's collaboration.

Top Co-Authors

Bernd Korn (German Aerospace Center)

Peter Hecker (German Aerospace Center)

Bernd Lorenz (German Aerospace Center)