
Publication


Featured research published by Daniel Lüdtke.


IEEE Aerospace Conference | 2014

OBC-NG: Towards a reconfigurable on-board computing architecture for spacecraft

Daniel Lüdtke; Karsten Westerdorff; Kai Stohlmann; Anko Börner; Olaf Maibaum; Ting Peng; Benjamin Weps; Görschwin Fey; Andreas Gerndt

The computational demands on spacecraft are rapidly increasing. Current on-board computing components and architectures cannot keep up with the growing requirements. Only a small selection of space-qualified processors and FPGAs is available, and current architectures stick with the inflexible cold-redundant structure. The objective of the ongoing project OBC-NG (On-board Computer - Next Generation) is to find new concepts for on-board computers to fulfill future requirements. The concept presented in this paper is based on a distributed reconfigurable system consisting of different nodes for processing, management, and interface operations. OBC-NG will exploit the high performance of commercial off-the-shelf (COTS) hardware parts. To compensate for the shortcomings of COTS parts, the OBC-NG redundancy approach differs from the classic way, and error mitigation techniques will work mainly at the software level. This paper discusses the hardware and software architecture of the system as well as the redundancy and reconfiguration concept. Our ideas will be proven in an OBC-NG prototype, planned for the next year.


Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises | 2012

Collaborative Development and Cataloging of Simulation and Calculation Models for Space Systems

Daniel Lüdtke; Jean-Sébastien Ardaens; Meenakshi Deshmukh; Rosa Paris Lopez; Andy Braukhane; Ivanka Pelivan; Stephan Theil; Andreas Gerndt

The application of modeling and simulation in the design, development, and validation process of complex systems has increased significantly in recent decades. Creating high-quality models is a time-consuming task, particularly if models are to be shared and reused in future projects. In this paper, results of the project Simulation Model Library (SimMoLib) are presented that address the issues concerning the preservation of knowledge that lies within simulation and calculation models. SimMoLib provides modeling guidelines and best practices to help developers prepare models that can be reused in other contexts. Validation and verification of these models is covered by proposing a set of guidelines and two test frameworks as well as by promoting peer reviews of models by other experts. To archive, catalog, and distribute models, a software framework is under development which supports the creation, management, retrieval, and utilization of models. To allow the collaborative editing of calculation and simulation models, a simplified version control mechanism is established. SimMoLib currently targets the development of space systems; in the future it will be opened to other domains.


AIAA SPACE 2013 Conference and Exposition | 2013

A Continuous Verification Process in Concurrent Engineering

Volker Schaus; Michael Tiede; Philipp M. Fischer; Daniel Lüdtke; Andreas Gerndt

This paper presents how a continuous mission verification process, similar to that used in software engineering, can be applied in early spacecraft design and Concurrent Engineering. Following the Model-based Systems Engineering paradigm, all engineers contribute to a single centralized data model of the system. The data model is enriched with extra information to create an executable representation of the spacecraft and its mission. That executable scenario allows for verification against requirements that have been formalized using appropriate techniques from the field of formal verification. The paper focuses on a current approach to integrating this verification mechanism into our Concurrent Engineering environment. In an example study, we explain how basic mission requirements are created at the beginning of the spacecraft design. After each iteration and change, the integrated verification is executed. This instantly highlights the effects of the modification and points out potential problems in the design. Using the continuous verification process alongside the Concurrent Engineering process helps to mature both the requirements and the design itself.


Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises | 2011

Collaborative Development of a Space System Simulation Model

Volker Schaus; Karsten Großekatthöfer; Daniel Lüdtke; Andreas Gerndt

Modeling and simulation is a powerful method to evaluate the design of a space system. Simulation models represent valuable knowledge and require considerable time and effort for their development. Means for reuse should be taken into account from the beginning of model creation. This paper presents a collaborative model development process, which creates prerequisite information for successful reuse of simulation models. It introduces a knowledge model and proposes reviewed documentation at each step in the process. These pieces of documentation enable successive reuse at different levels. The modeling process was evaluated by creating a system simulation of the OOV-TET-1 satellite including three satellite subsystems, dynamics, kinematics, and space environment. Furthermore, the organization of models and their documentation artifacts is crucial in order to search, find, and reuse models across project partners and across projects. The paper suggests a flexible model database that suits the special requirements of typical space projects and large research organizations.


Simulation | 2008

Chip Multiprocessor Traffic Models Providing Consistent Multicast and Spatial Distributions

Dietmar Tutsch; Daniel Lüdtke

Chip multiprocessors (CMPs) have become the center of attention in recent years. They consist of multiple processor cores on a single chip. These cores are connected on-chip by a bus or, if many cores are involved, by an appropriate network. To investigate how a multicore processor behaves depending on the chosen network-on-chip topology, a corresponding model must be established for performance evaluation. Modeling and simulating the entire system would lead to high model complexity. Thus, it is more reasonable to exclude the cores and to model the detached network stochastically. The cores are replaced by traffic generators, which must provide reasonable CMP traffic. Such traffic usually consists of multicasts and a particular spatial distribution. Because the traffic is not known exactly, both multicasts and spatial traffic are described as stochastic distributions for model input. The easiest way is to specify the spatial distribution of the traffic and the kind of multicasts independently of each other. However, not all multicast distributions can be achieved with a particular desired spatial distribution, and vice versa. It is therefore important to check the compatibility of the spatial distribution and the multicasts that the modeler is willing to investigate. Such a compatibility check is provided by the algorithm presented in this paper. It prevents inconsistent traffic parameters while modeling.
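The compatibility idea above can be illustrated with a toy necessary-condition check: if each packet chooses its number of destinations from a multicast size distribution, the per-target hit probabilities implied by the spatial distribution must each lie in [0, 1] and must sum to the mean multicast size. This sketch is not the paper's actual algorithm; all names and values are invented for illustration.

```python
# Toy consistency check between a multicast size distribution and a
# per-target spatial distribution (illustrative only, not the paper's
# algorithm).

def compatible(multicast_dist, spatial_hits, tol=1e-9):
    """multicast_dist: {k: P(packet has k destinations)}
    spatial_hits: [P(target j is among a packet's destinations)]"""
    if abs(sum(multicast_dist.values()) - 1.0) > tol:
        raise ValueError("multicast distribution must sum to 1")
    # Mean number of destinations per packet.
    mean_size = sum(k * p for k, p in multicast_dist.items())
    # Each target's hit probability must be a valid probability ...
    if any(not (0.0 <= s <= 1.0 + tol) for s in spatial_hits):
        return False
    # ... and the hit probabilities must sum to the mean multicast size.
    return abs(sum(spatial_hits) - mean_size) <= tol
```

For example, a 50/50 mix of unicasts and two-destination multicasts (mean size 1.5) is consistent with two targets hit with probability 0.75 each, but pure unicast traffic cannot hit two targets with probability 0.6 each.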


Nature | 2018

Space-borne Bose–Einstein condensation for precision interferometry

Dennis Becker; Maike Diana Lachmann; Stephan Seidel; Holger Ahlers; Aline Dinkelaker; Jens Grosse; Ortwin Hellmig; Hauke Müntinga; Vladimir Schkolnik; Thijs Wendrich; André Wenzlawski; Benjamin Weps; Robin Corgier; Tobias Franz; Naceur Gaaloul; Waldemar Herr; Daniel Lüdtke; Manuel Popp; Sirine Amri; Hannes Duncker; Maik Erbe; Anja Kohfeldt; André Kubelka-Lange; Claus Braxmaier; Eric Charron; W. Ertmer; Markus Krutzik; Claus Lämmerzahl; Achim Peters; Wolfgang P. Schleich

Owing to the low-gravity conditions in space, space-borne laboratories enable experiments with extended free-fall times. Because Bose–Einstein condensates have an extremely low expansion energy, space-borne atom interferometers based on Bose–Einstein condensation have the potential to have much greater sensitivity to inertial forces than do similar ground-based interferometers. On 23 January 2017, as part of the sounding-rocket mission MAIUS-1, we created Bose–Einstein condensates in space and conducted 110 experiments central to matter-wave interferometry, including laser cooling and trapping of atoms in the presence of the large accelerations experienced during launch. Here we report on experiments conducted during the six minutes of in-space flight in which we studied the phase transition from a thermal ensemble to a Bose–Einstein condensate and the collective dynamics of the resulting condensate. Our results provide insights into conducting cold-atom experiments in space, such as precision interferometry, and pave the way to miniaturizing cold-atom and photon-based quantum information concepts for satellite-based implementation. In addition, space-borne Bose–Einstein condensation opens up the possibility of quantum gas experiments in low-gravity conditions [1,2].

A Bose–Einstein condensate is created in space that has sufficient stability to enable its characteristic dynamics to be studied.


IEEE Aerospace Conference | 2013

Knowledge Management tools integration within DLR's concurrent engineering facility

Rosa Paris Lopez; Geeta Soragavi; Meenakshi Deshmukh; Daniel Lüdtke

The complexity of space endeavors has increased the need for Knowledge Management (KM) tools. The concept of KM involves not only the electronic storage of knowledge, but also the process of making this knowledge available, reusable, and traceable. Establishing a KM concept within the Concurrent Engineering Facility (CEF) has been a research topic of the German Aerospace Center (DLR). This paper presents the current KM tools of the CEF: the Software Platform for Organizing and Capturing Knowledge (S.P.O.C.K.), the data model Virtual Satellite (VirSat), and the Simulation Model Library (SimMoLib), and how their usage improved the Concurrent Engineering (CE) process. This paper also presents the lessons learned from the introduction of KM practices into the CEF and elaborates a roadmap for the further development of KM in CE activities at DLR. The results of applying the Knowledge Management tools have shown the potential of merging the three software platforms and their functionalities as the next step towards the full integration of KM practices into the CE process. VirSat will remain the main software platform used within a CE study, and S.P.O.C.K. and SimMoLib will be integrated into VirSat. These tools will support the data model as a reference and documentation source and as an access point to simulation and calculation models. The use of KM tools in the CEF aims to become a basic practice during the CE process. Establishing this practice will result in a much more extensive knowledge and experience exchange within the Concurrent Engineering environment and, consequently, the outcome of the studies will comprise higher-quality designs of space systems.


IEEE Aerospace Conference | 2013

A formal method for early spacecraft design verification

Philipp M. Fischer; Daniel Lüdtke; Volker Schaus; Andreas Gerndt

In the early design phase of a spacecraft, various aspects of the system under development are described and modeled using parameters such as masses, power consumption, or data rates. Power and data parameters in particular are special since their values can change depending on the spacecraft's operational mode. These mode-dependent parameters can easily be verified against static requirements such as a maximum data rate. Such quick verifications allow the engineers to check the design after every change they apply. In contrast, requirements concerning the mission lifetime, such as the amount of downlinked data during the whole mission, demand a more complex procedure. We propose an executable model together with a simulation framework to evaluate complex mission scenarios. In conjunction with a formalized specification of mission requirements, it allows quick verification by means of formal methods.
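A minimal sketch of the "quick verification" of mode-dependent parameters described above: each operational mode's value is checked against a static requirement. The mode names, parameter values, and the limit below are hypothetical; the paper's actual data model is not shown.

```python
# Hedged sketch: mode-dependent data rates checked against a static
# requirement. All modes and values are invented for illustration.

MAX_DOWNLINK_RATE_KBPS = 500  # assumed static requirement

modes = {  # hypothetical operational modes with mode-dependent parameters
    "safe":     {"data_rate_kbps": 10,  "power_w": 40},
    "nominal":  {"data_rate_kbps": 250, "power_w": 120},
    "downlink": {"data_rate_kbps": 480, "power_w": 150},
}

def violating_modes(modes, limit):
    """Return the names of modes whose data rate exceeds the limit."""
    return [name for name, params in modes.items()
            if params["data_rate_kbps"] > limit]
```

Running such a check after every design change gives the instant feedback the abstract describes; lifetime requirements, by contrast, need the simulated mission scenario.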


Archive | 2006

An Analyzable On-Chip Network Architecture for Embedded Systems

Daniel Lüdtke; Dietmar Tutsch; Günter Hommel

The increasing integration level of modern and forthcoming Integrated Circuits (ICs) allows the implementation of complex systems on a single chip (System-on-Chip, SoC) [1]. To reduce time to market in the design flow of chip development, prefabricated components (known as Intellectual Property, IP) are used. IP cores can be CPUs, memory blocks, signal processing units, etc. Simplified, the design of new complex customized ICs often represents a composition of IP cores. The main task of a chip designer is the selection, parameterization, and interconnection of the different cores. Currently, the interconnections between the IP cores are realized with either dedicated point-to-point interconnections or standardized system buses. Dedicated point-to-point connections are only manageable and economically feasible in smaller systems. With increasing complexity, it is not possible to connect every core with dedicated wires. Buses, on the other hand, are an example of shared communication resources. System design with a standardized bus and IP cores with an interface to that bus becomes much simpler. Examples of on-chip buses are IBM CoreConnect, the AMBA bus by ARM, and the VCI standard by the VSIA. However, as a drawback, buses are not scalable for larger designs. The communication between cores becomes the performance bottleneck of the system. SoC design tends to replace buses with packet-switched interconnection networks [5]. The architecture of switched on-chip networks is similar to


Integrated Formal Methods | 2017

Task-Node Mapping in an Arbitrary Computer Network Using SMT Solver

Andrii Kovalov; Elisabeth Lobe; Andreas Gerndt; Daniel Lüdtke

The problem of mapping (assigning) application tasks to processing nodes in a distributed computer system for spacecraft is investigated in this paper. The network architecture is developed in the project 'Scalable On-Board Computing for Space Avionics' (ScOSA) at the German Aerospace Center (DLR). In the ScOSA system, the processing nodes are connected in a network with an arbitrary topology. The applications are structured as directed graphs of periodic and aperiodic tasks that exchange messages. In this paper, a formal definition of the mapping problem is given. We demonstrate several ways to formulate it as a satisfiability modulo theories (SMT) problem and then use Z3, a state-of-the-art SMT solver, to produce the mapping. The approach is evaluated on a mapping problem for an optical navigation application as well as on a set of randomly generated task graphs.
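For a toy instance, the sat/unsat flavor of the SMT formulation can be mimicked by exhaustive search over assignments under a capacity constraint. The paper itself encodes the constraints for Z3; the task names, loads, and node capacities below are invented, and constraints from message exchange are omitted.

```python
# Illustrative brute-force stand-in for the SMT mapping: find one
# task->node assignment respecting node capacities, or report that none
# exists. Task graph and capacities are invented for this sketch.
from itertools import product

tasks = ["nav", "img", "ctrl"]            # hypothetical tasks
load  = {"nav": 2, "img": 3, "ctrl": 1}   # processing load per task
nodes = {"n0": 4, "n1": 3}                # node -> processing capacity

def find_mapping(tasks, load, nodes):
    """Return one feasible task->node mapping, or None if unsatisfiable
    (mirroring an SMT solver's sat/unsat answer)."""
    names = list(nodes)
    for assignment in product(names, repeat=len(tasks)):
        mapping = dict(zip(tasks, assignment))
        used = {n: 0 for n in names}
        for task, node in mapping.items():
            used[node] += load[task]
        if all(used[n] <= nodes[n] for n in names):
            return mapping
    return None
```

An SMT solver explores the same search space symbolically instead of enumerating it, which is what makes the approach scale to the arbitrary topologies and task graphs the paper targets.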

Collaboration


Dive into Daniel Lüdtke's collaboration.

Top Co-Authors

Olaf Maibaum
German Aerospace Center

Dietmar Tutsch
International Computer Science Institute

Ting Peng
German Aerospace Center

Görschwin Fey
Hamburg University of Technology

Kai Borchers
German Aerospace Center