Nassir W. Oumer
German Aerospace Center
Publications
Featured research published by Nassir W. Oumer.
International Conference on Computer Vision Theory and Applications | 2014
Nassir W. Oumer
A space object such as a satellite has a highly specular surface and, when exposed to a directional light source, is very difficult to track visually. However, camera-based tracking provides an inexpensive solution to the problem of on-orbit servicing of a satellite, such as orbital-life extension through repair and refuelling, or debris removal. In this paper we present a real-time pose tracking method for such an object under direct sunlight, combining keypoint- and edge-based approaches with a known, simple geometric model. The implemented algorithm is relatively accurate and robust to specular reflection. We show results based on real images from an on-orbit servicing simulation system consisting of two six-degree-of-freedom robots, a Sun simulator and a full-scale satellite mock-up.
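The core of such model-based pose tracking is iterative minimisation of the reprojection error between projected model points and their image observations. The sketch below is a minimal, hypothetical illustration of that idea (not the paper's implementation): a pinhole camera with an assumed focal length, an axis-angle pose parameterisation, and a Gauss-Newton refinement with a numeric Jacobian.

```python
import numpy as np

def rodrigues(w):
    """Axis-angle vector -> rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def project(points, w, t, f=800.0, c=(320.0, 240.0)):
    """Pinhole projection of 3-D model points under pose (w, t)."""
    P = points @ rodrigues(w).T + t
    return np.column_stack((f * P[:, 0] / P[:, 2] + c[0],
                            f * P[:, 1] / P[:, 2] + c[1]))

def refine_pose(points3d, obs2d, w, t, iters=20, eps=1e-6):
    """Gauss-Newton minimisation of reprojection error, numeric Jacobian."""
    x = np.hstack((w, t)).astype(float)
    for _ in range(iters):
        r = (project(points3d, x[:3], x[3:]) - obs2d).ravel()
        J = np.empty((r.size, 6))
        for j in range(6):
            d = np.zeros(6); d[j] = eps
            r2 = (project(points3d, (x + d)[:3], (x + d)[3:]) - obs2d).ravel()
            J[:, j] = (r2 - r) / eps
        x -= np.linalg.solve(J.T @ J + 1e-9 * np.eye(6), J.T @ r)
    return x[:3], x[3:]
```

In a tracking loop, the pose estimated for the previous frame would seed `refine_pose` for the current one; the keypoint and edge measurements of the actual method supply `obs2d`.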
IEEE Aerospace Conference | 2015
Roberto Lampariello; Nassir W. Oumer; Jordi Artigas; Wolfgang Rackl; Giorgio Panin; Ralf Purschke; Jan Harder; Ulrich Walter; Jürgen Frickel; Ismar Masic; Karthik Ravandoor; Julian Scharnagl; Klaus Schilling; Klaus Landzettel; Gerd Hirzinger
Orbital robotics is receiving growing attention worldwide for applications in servicing and repositioning of partially or fully defective satellites. In this paper, we present the scope and main results of a four-year research project that aimed at developing the necessary robotic technologies for such applications. The scope is twofold, since we address both the human-operated robotic mode, referred to in robotics as force-feedback teleoperation, and the alternative autonomous mode, for the specific task of approaching and grasping a freely tumbling target satellite. We present methodological developments and experimental as well as numerical validations in the fields of telecommunications, computer vision, robot and spacecraft control, and system identification. The results of this work constitute important advances in the fundamental building blocks necessary for the orbital applications of interest.
Archive | 2015
Nassir W. Oumer; Giorgio Panin
This paper focuses on vision-based detection and tracking of a satellite's nozzle for rendezvous and proximity operations at very close range. For this purpose, on-board cameras can provide an accurate and robust solution during the approach. However, the illumination conditions in space are especially challenging, owing to direct sunlight exposure and the glossy surface of a satellite. We propose an efficient tracking method that runs on a standard processor and deals robustly with these issues by exploiting model and image edges. The algorithm has been validated at the European Proximity Operations Simulator facility of DLR, using a ground simulation system that reproduces sunlight conditions with a high-power floodlight, satellite surface properties with reflective foils, and complex motion trajectories with ground-truth data.
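A common way to align projected model edges with image edges, as in edge-based trackers of this kind, is chamfer matching: precompute a distance transform of the image edge map, then search for the model placement that minimises the mean distance to the nearest edge. The following is a toy sketch of that principle (city-block distance transform, integer-shift search only), not the paper's algorithm.

```python
import numpy as np

def distance_transform(edges):
    """Two-pass city-block distance transform of a binary edge map."""
    h, w = edges.shape
    d = np.where(edges, 0, h + w).astype(float)
    for y in range(h):            # forward pass
        for x in range(w):
            if y > 0: d[y, x] = min(d[y, x], d[y - 1, x] + 1)
            if x > 0: d[y, x] = min(d[y, x], d[y, x - 1] + 1)
    for y in range(h - 1, -1, -1):  # backward pass
        for x in range(w - 1, -1, -1):
            if y < h - 1: d[y, x] = min(d[y, x], d[y + 1, x] + 1)
            if x < w - 1: d[y, x] = min(d[y, x], d[y, x + 1] + 1)
    return d

def chamfer_score(dt, pts, offset):
    """Mean distance from shifted model edge points to the nearest image edge."""
    p = pts + offset
    valid = (p[:, 0] >= 0) & (p[:, 0] < dt.shape[0]) & \
            (p[:, 1] >= 0) & (p[:, 1] < dt.shape[1])
    if not valid.any():
        return np.inf
    return dt[p[valid, 0], p[valid, 1]].mean()

def best_shift(dt, pts, search=5):
    """Exhaustive search over integer shifts for the best chamfer alignment."""
    best, arg = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            s = chamfer_score(dt, pts, np.array([dy, dx]))
            if s < best:
                best, arg = s, (dy, dx)
    return arg, best
```

A full tracker would replace the integer-shift search with a 6-DoF pose optimisation over the projected 3-D model contour, but the scoring idea is the same.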
Frontiers in Robotics and AI | 2018
Steffen Jaekel; Roberto Lampariello; Wolfgang Rackl; Marco De Stefano; Nassir W. Oumer; Alessandro M. Giordano; Oliver Porges; Markus Pietras; Bernhard Brunner; John Ratti; Quirin Muehlbauer; Markus Thiel; Stephane Estable; Robin Biesbroek; Alin Albu-Schaeffer
This paper presents a robotic capture concept that was developed as part of the e.deorbit study by ESA. The defective and tumbling satellite ENVISAT was chosen as a potential target to be captured, stabilized, and subsequently de-orbited in a controlled manner. The concept is based on a chaser satellite equipped with a seven-degree-of-freedom dexterous robotic manipulator holding a dedicated linear two-bracket gripper. The chaser is also equipped with a clamping mechanism for achieving a stiff fixation with the grasped target, following their combined satellite-stack de-tumbling and prior to the execution of the de-orbit maneuver. Driving elements of the robotic design, operations and control are described and analyzed. These include pre- and post-capture operations, the task-specific kinematics of the manipulator, the intrinsic mechanical arm flexibility and its effect on the arm's positioning accuracy, visual tracking, and the interaction between the manipulator controller and that of the chaser satellite. The kinematics analysis yielded robust reachability of the grasp point. The effects of intrinsic arm flexibility turned out to be noticeable but effectively scalable through robot joint-speed adaptation throughout the maneuvers. During most of the critical robot arm operations, the internal robot joint torques are shown to be within the design limits; these limits are reached only in a limiting tumbling scenario for ENVISAT, consisting of an initial pure spin of 5 deg/s about its unstable intermediate axis of inertia. The computer vision performance was found to be satisfactory with respect to positioning-accuracy requirements. Further developments are necessary and are being pursued to meet the stringent mission-related robustness requirements.
Overall, the analyses conducted in this study showed that the capture and de-orbiting of ENVISAT using the proposed robotic concept is feasible with respect to relevant mission requirements and for most of the operational scenarios considered. Future work aims at developing a combined chaser-robot system controller. This will include a visual servo to minimize the positioning errors during the contact phases of the mission (grasping and clamping). Further validation of the visual tracking in orbital lighting conditions will be pursued.
IEEE Aerospace Conference | 2017
Martin Lingenauber; Klaus H. Strobl; Nassir W. Oumer; Simon Kriegel
This paper discusses the potential benefits of plenoptic cameras for robot vision during on-orbit servicing missions. Robot vision is essential for the accurate and reliable positioning of a robotic arm with millimeter accuracy during tasks such as grasping, inspection or repair that are performed in close range to a client satellite. Our discussion of plenoptic camera technology provides an overview of its conceptual advantages for robot vision with regard to the conditions during an on-orbit servicing mission. A plenoptic camera, also known as a light field camera, is basically a conventional camera system equipped with an additional array of lenslets, the micro lens array, at a distance of a few micrometers in front of the camera sensor. The micro lens array makes it possible to record not only the incidence location of a light ray but also its incidence direction on the sensor, resulting in a 4-D data set known as a light field. From the 4-D light field, regular 2-D intensity images can be derived with a significantly extended depth of field compared to a conventional camera. This yields a set of advantages, such as software-based refocusing or increased image quality in low light conditions, by recording with an optimal aperture while maintaining an extended depth of field. Additionally, the parallax between corresponding lenslets allows 3-D depth images to be derived from the same light field, and therefore a stereo vision system can be replaced with a single camera. Given these conceptual advantages, we investigate what can be expected from plenoptic cameras during close-range robotic operations in the course of an on-orbit servicing mission. This includes topics such as image quality, extension of the depth of field, 3-D depth map generation and low light capabilities. Our discussion is backed by image sequences for an on-orbit servicing scenario that were recorded in a representative laboratory environment with simulated in-orbit illumination conditions.
We mounted a plenoptic camera on a robot arm and performed an approach trajectory from a distance of up to 2 m towards a full-scale satellite mock-up. Using these images, we investigated how the light field processing performs, e.g. in terms of depth-of-field extension, image quality and depth estimation. We were also able to show the applicability of images derived from light fields for vision-based pose estimation of a target point.
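The software-based refocusing mentioned above rests on the classic shift-and-add principle: each sub-aperture view is translated in proportion to its viewpoint offset and the views are averaged, so that objects at the chosen depth align and appear sharp. The sketch below is a toy illustration on synthetic views, not the actual camera pipeline; the shift model and parameter names are assumptions.

```python
import numpy as np

def refocus(subviews, positions, alpha):
    """Shift-and-add refocusing of a light field.

    subviews  : (N, H, W) sub-aperture images
    positions : N pairs of viewpoint offsets (u, v)
    alpha     : refocus parameter; each view is shifted by alpha * (u, v)
    """
    acc = np.zeros_like(subviews[0], dtype=float)
    for img, (u, v) in zip(subviews, positions):
        dy, dx = int(round(alpha * u)), int(round(alpha * v))
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / len(subviews)
```

Choosing `alpha` selects the in-focus depth plane; sweeping it and measuring local sharpness is also a simple route to the depth maps discussed above.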
International Symposium on Visual Computing | 2012
Giorgio Panin; Nassir W. Oumer
We describe an ego-motion algorithm based on dense spatio-temporal correspondences, using semi-global stereo matching (SGM) and bilateral image warping in time. The main contribution is an improvement in the accuracy and robustness of such techniques, achieved by attending to speed and numerical stability while employing twice the structure and data for the motion estimation task, in a symmetric way. In our approach we keep the tasks of structure and motion estimation separate, solved respectively by SGM and by our pose estimation algorithm. Concerning the latter, we show the benefits of our rectified, bilateral formulation, which provides more robustness to noise and disparity errors at the price of a moderate increase in computational complexity, itself reduced by an improved Gauss-Newton descent.
International Conference on Pattern Recognition | 2012
Nassir W. Oumer; Giorgio Panin
Acta Astronautica | 2015
Nassir W. Oumer; Giorgio Panin; Quirin Mülbauer; Anastasia Tseneklidou
Archive | 2012
Nassir W. Oumer; Giorgio Panin
International Symposium on Artificial Intelligence | 2018
Raul Dominguez; Shashank Govindaraj; Jeremi Gancet; Mark Post; Romain Michalec; Nassir W. Oumer; Bilal Wehbe; Alessandro Bianco; Alexander Fabisch; Simon Lacroix; Andrea De Maio; Quentin Labourey; Fabrice Souvannavong; Vincent Bissonnette; Michal Smisek; Xiu Yan