Publications


Featured research published by Alexander Ditter.


International Workshop on Model Checking Software | 2012

On parallel software verification using Boolean equation systems

Alexander Ditter; Milan Češka; Gerald Lüttgen

Multi- and many-core hardware platforms are widely available today and are used to significantly accelerate many computationally demanding tasks. In this paper we describe a parallel approach to solving Boolean Equation Systems (BESs) in the context of model checking. We focus on the applicability of state-of-the-art, shared-memory parallel hardware, i.e., multi-core CPUs and many-core GPUs, to speed up the resolution procedure for BESs. In this setting, we experimentally show the scalability and competitiveness of our approach compared to an optimized sequential implementation, based on a large benchmark suite containing models of software systems and protocols from industry and academia.
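
As an illustration of the kind of computation being parallelized, the following minimal C++ sketch resolves a single mu-block BES by fixed-point iteration. The encoding (Op, Equation) and the solver are illustrative names of our own; the paper's actual parallel CPU/GPU algorithm is not reproduced here.

```cpp
#include <cstdio>
#include <vector>

// Minimal sketch of resolving a single mu-block Boolean Equation System
// (BES) by fixed-point iteration: all variables start at false (the least
// fixed point for a mu-block) and are re-evaluated until nothing changes.
// The paper parallelizes this kind of resolution on multi-core CPUs and
// many-core GPUs; the encoding below is illustrative only.

enum class Op { And, Or };

struct Equation {
    Op op;                  // how the right-hand side combines its operands
    std::vector<int> deps;  // indices of the variables it refers to
};

std::vector<bool> solve(const std::vector<Equation>& eqs) {
    std::vector<bool> val(eqs.size(), false);  // mu-block: start from false
    bool changed = true;
    while (changed) {                          // sweep until a fixed point
        changed = false;
        for (size_t i = 0; i < eqs.size(); ++i) {
            bool v = (eqs[i].op == Op::And);   // neutral element of AND/OR
            for (int d : eqs[i].deps)
                v = (eqs[i].op == Op::And) ? (v && val[d]) : (v || val[d]);
            if (v != val[i]) { val[i] = v; changed = true; }
        }
    }
    return val;
}

int main() {
    // X0 = X1 || X2,  X1 = true (empty conjunction),  X2 = X0 && X1
    std::vector<Equation> eqs = {
        {Op::Or,  {1, 2}},
        {Op::And, {}},
        {Op::And, {0, 1}},
    };
    std::vector<bool> v = solve(eqs);
    for (size_t i = 0; i < v.size(); ++i)
        std::printf("X%zu = %s\n", i, v[i] ? "true" : "false");
}
```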


International Symposium on Industrial Embedded Systems | 2009

A wait-free queue for multiple enqueuers and multiple dequeuers using local preferences and pragmatic extensions

Philippe Stellwag; Alexander Ditter

Queues are one of the most commonly used data structures in applications and operating systems [1]. Up-and-coming multi-core processors force software developers to redesign data structures in order to make them thread-safe. In real-time systems, e.g., robotic controls, parallelization is even more complicated, as such systems must guarantee to meet their mostly hard deadlines. A considerable amount of research has been carried out on wait-free objects [2] to achieve this. Wait-freedom guarantees that each potentially concurrent thread completes its operation within a bounded number of steps. However, practical wait-free queues that support multiple enqueue, dequeue and read operations do not exist yet. Therefore, we present a statically allocated and statically linked queue that supports arbitrary concurrent operations. Our approach is also applicable in other scenarios where unsorted queues with statically allocated elements are used. Moreover, we introduce ‘local preferences’ to minimize contention. However, as the response time of our enqueue operation directly depends on the fill level, the response times of a nearly full queue remain an issue, and our approach is jitter-prone under a varying fill level. In this paper, we address these issues with an approach using a helping queue. The results show that we can decrease the worst-case execution time by a factor of approximately twenty. Additionally, we reduce the average response times of potentially concurrent enqueue operations in our queue. To the best of our knowledge, our wait-free queue is the best known practical solution for an unsorted thread-safe queue with multiple enqueuers, multiple dequeuers and multiple readers.
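
The sketch below is not the paper's algorithm, but it illustrates two of its ideas in C++: an unsorted queue over statically allocated slots, and per-thread ‘local preferences’ that spread concurrent operations across different slots. The three-state slot protocol and all names are our own assumptions; note how the number of probes, and hence the response time, grows with the fill level, which is exactly the issue the paper's helping queue addresses.

```cpp
#include <atomic>
#include <cstdio>
#include <optional>

// Illustrative sketch (not the paper's exact algorithm): an unsorted queue
// over a statically allocated slot array. An enqueue claims any empty slot
// with a compare-and-swap; each thread starts scanning at its own "local
// preference" index so concurrent threads tend to probe different slots.
// Because storage is a fixed array, a successful operation finishes after
// at most N slot probes, i.e. within a bounded number of steps.

constexpr int N = 64;                      // statically allocated capacity

enum : int { EMPTY = 0, BUSY = 1, FULL = 2 };

struct Slot {
    std::atomic<int> state{EMPTY};
    int value{0};                          // payload, valid only when FULL
};

static Slot slots[N];                      // static storage, no allocation

// pref: this thread's preferred start index (its "local preference")
bool enqueue(int value, int pref) {
    for (int i = 0; i < N; ++i) {          // at most N probes
        Slot& s = slots[(pref + i) % N];
        int expected = EMPTY;
        // Claim an empty slot; a failed CAS means another thread won it.
        if (s.state.compare_exchange_strong(expected, BUSY,
                                            std::memory_order_acquire)) {
            s.value = value;               // write payload, then publish
            s.state.store(FULL, std::memory_order_release);
            return true;
        }
    }
    return false;                          // queue full
}

std::optional<int> dequeue(int pref) {
    for (int i = 0; i < N; ++i) {
        Slot& s = slots[(pref + i) % N];
        int expected = FULL;
        if (s.state.compare_exchange_strong(expected, BUSY,
                                            std::memory_order_acquire)) {
            int v = s.value;               // read payload, then release slot
            s.state.store(EMPTY, std::memory_order_release);
            return v;
        }
    }
    return std::nullopt;                   // queue empty
}

int main() {
    enqueue(42, /*pref=*/7);
    if (auto v = dequeue(0)) std::printf("dequeued %d\n", *v);
}
```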


IEEE/ACM International Conference on Utility and Cloud Computing | 2015

SmartEco: An integrated solution from load balancing between the grid and consumers to local energy efficiency

Alexander Ditter; Dietmar Fey; Johannes Bürner; Jörg Franke

The transition from conventionally generated energy towards renewable energy sources is an important topic. Besides the need for new power plant concepts, this paradigm shift also implies new structural requirements for the grid. In particular, the on-premise generation of energy via photovoltaics highlights the most important difference compared to conventional energy generation and distribution. The structure of the grid, for the most part, still follows the pattern of a strong root connection from the power plant, transforming the energy for lower-power distribution branches multiple times on the way to the consumer. One of the main characteristics of renewable energy sources is their more consumer-local and distributed generation, so existing distribution paths will increasingly become a bottleneck in the future. One way to solve this problem would be to abandon the current grid structure and replace it with a new, more suitable one. Yet such a fundamental structural change would take a very long time to implement and induce costs far beyond the benefit. The current solution, commonly called the smart grid, is mostly driven by information, such as customers' energy consumption amounts and times, in order to match energy generation, distribution and consumption. Our SmartEco approach goes one step further and makes it possible, especially for small and local energy suppliers, to offload some of the energy surplus from the grid into individual customer homes.
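
As a purely hypothetical sketch of the offloading idea, the following C++ snippet greedily places a generation surplus into the free storage capacity of customer homes and returns whatever remains for the grid. All names, units and the greedy policy are our assumptions, not the SmartEco design.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Hypothetical illustration of offloading a local generation surplus into
// customer-side storage instead of pushing it upstream through the grid.

struct Home {
    const char* name;
    double free_kwh;   // remaining local storage capacity (assumed known)
};

// Distribute surplus_kwh greedily over homes with free capacity and return
// whatever could not be placed locally.
double offload(double surplus_kwh, std::vector<Home>& homes) {
    for (Home& h : homes) {
        double take = std::min(h.free_kwh, surplus_kwh);
        h.free_kwh -= take;
        surplus_kwh -= take;
        if (surplus_kwh <= 0.0) break;
    }
    return surplus_kwh;   // remainder still has to go into the grid
}

int main() {
    std::vector<Home> homes = {{"home A", 3.5}, {"home B", 1.0}, {"home C", 0.0}};
    double rest = offload(5.0, homes);
    std::printf("%.1f kWh could not be offloaded locally\n", rest);
}
```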


International Conference on Imaging Systems and Techniques | 2013

The impact of H.264/AVC on compression and non-destructive evaluation of piston data in industrial Computed Tomography

Alexander Ditter; Dietmar Fey; Tobias Schön; Maik Luxa; Roland Gruber

The application of video encoding to the compression of Computed Tomography (CT) projection images is very promising with respect to a more efficient transmission and storage of this particular type of data. Especially the use of “off-the-shelf” technologies, such as the H.264/AVC codec, ensures long-term support due to their widespread use in the media industry. We present an approach for applying this standard 8-bit video codec even to 16-bit projection data sets. Based on a benchmark set of 200 pistons, we evaluate its performance in terms of the compression rate relative to the original input data. Furthermore, we compare the detection rate of manufacturing defects in the reconstructed volumes to the state-of-the-art technique without video encoding. For this purpose, our benchmark contains not only the projections of the 200 pistons but also the exact location and size of the manufacturing defects of each piston. We show that the detection rate based on the video-encoded projection data is, up to a certain threshold, as good as without compression.
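
The abstract does not spell out how the 8-bit codec ingests 16-bit samples. One common mapping, shown here purely as an assumption, is to split each 16-bit sample into a high-byte and a low-byte plane that are encoded as separate 8-bit streams and recombined after decoding:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Assumed (not confirmed by the abstract) bit-depth mapping: split every
// 16-bit projection sample into its high and low byte so that a standard
// 8-bit codec such as H.264/AVC can encode the two planes separately; the
// decoder side recombines them, lossless up to whatever loss the codec
// itself introduces on each plane.

void split16(const std::vector<uint16_t>& img,
             std::vector<uint8_t>& hi, std::vector<uint8_t>& lo) {
    hi.resize(img.size());
    lo.resize(img.size());
    for (size_t i = 0; i < img.size(); ++i) {
        hi[i] = static_cast<uint8_t>(img[i] >> 8);    // most significant byte
        lo[i] = static_cast<uint8_t>(img[i] & 0xFF);  // least significant byte
    }
}

uint16_t merge(uint8_t hi, uint8_t lo) {
    return static_cast<uint16_t>((hi << 8) | lo);
}

int main() {
    std::vector<uint16_t> proj = {0x0000, 0x1234, 0xFFFF};  // toy projection row
    std::vector<uint8_t> hi, lo;
    split16(proj, hi, lo);
    std::printf("0x%04X\n", merge(hi[1], lo[1]));  // prints 0x1234
}
```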


European Conference on Computer Systems | 2017

Fe2vCl2: From Bare Metal to High Performance Computing on Virtual Clusters and Cloud Infrastructure

Alexander Ditter; Gabriel Graf; Dietmar Fey

Container-based virtualization techniques are playing an increasingly important role in cloud and service-oriented computing. Technologies such as Docker enable the development and deployment of new and especially lightweight applications and allow for more cloud-provider-independent operation. Even though cloud infrastructure and containers offer many advantages, both technologies have yet to be widely adopted in the field of High Performance Computing (HPC). In this paper we present Fe2vCl2, a framework for enabling HPC applications on common cloud infrastructure. Using Docker to integrate into cloud environments, our framework allows the on-demand deployment of virtual clusters for the parallel execution of HPC applications. We describe our overall application architecture along with the advantages of both technologies, especially with regard to HPC applications. To verify our implementation, we evaluate it using different HPC applications and benchmarks.
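
As an illustration of the kind of workload such a virtual cluster must support, the following minimal MPI program (our example, not part of Fe2vCl2) lets every rank report the node it runs on, which makes it easy to check that processes are really spread across the deployed container nodes:

```cpp
#include <mpi.h>
#include <cstdio>

// Minimal MPI smoke test for a (containerized) cluster: each rank prints
// its id and host name, rank membership in MPI_COMM_WORLD confirms that
// the processes on the virtual cluster nodes actually formed one job.

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char host[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(host, &len);

    std::printf("rank %d of %d on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}
```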


International Symposium on Industrial Embedded Systems | 2016

Improving instruction accurate simulation for parallel automotive applications

Dominik Schoenwetter; Alexander Ditter; Dietmar Fey; Ralph Mader

High-level simulation and modeling techniques have matured significantly over recent years and have become more and more important in practice, e.g., in industrial hardware development and especially the automotive domain. Complex and detailed modeling requires a lot of time during preparation and execution and is quite error-prone, and thus increases the average time-to-market significantly. One popular approach to mitigate this problem is statistical modeling and simulation. In this paper, we focus on another high-level simulation approach for determining accurate runtimes of applications using instruction accurate modeling and simulation. We extend the basic instruction accurate simulation technology from OVP with cache models in conjunction with a statistical cost function, which enables high-precision runtime predictions with a significant improvement over the pure instruction accurate approach.
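
A minimal sketch of the general idea, with made-up sizes and cycle costs and without OVP's actual modeling API: every simulated memory access is classified by a small direct-mapped cache model, and the hit/miss counts are weighted by a statistically calibrated cost function to estimate runtime.

```cpp
#include <cstdint>
#include <cstdio>

// Toy direct-mapped cache model plus cost function, as one might attach to
// an instruction accurate simulator. All parameters are illustrative; the
// per-access costs stand in for the statistically calibrated cost function
// mentioned in the abstract.

constexpr int LINES = 256;         // number of cache lines
constexpr int LINE_BITS = 6;       // 64-byte lines
constexpr double HIT_COST = 1.0;   // cycles per hit (assumed calibration)
constexpr double MISS_COST = 40.0; // cycles per miss (assumed calibration)

struct CacheModel {
    uint64_t tag[LINES] = {};
    bool valid[LINES] = {};
    uint64_t hits = 0, misses = 0;

    void access(uint64_t addr) {
        uint64_t line = addr >> LINE_BITS;
        int idx = line % LINES;               // direct-mapped placement
        if (valid[idx] && tag[idx] == line) ++hits;
        else { ++misses; valid[idx] = true; tag[idx] = line; }
    }

    double estimated_cycles() const {
        return hits * HIT_COST + misses * MISS_COST;  // cost function
    }
};

int main() {
    CacheModel c;
    for (uint64_t a = 0; a < 4096; a += 4) c.access(a);  // cold sweep
    for (uint64_t a = 0; a < 4096; a += 4) c.access(a);  // warm sweep
    std::printf("hits=%llu misses=%llu est=%.0f cycles\n",
                (unsigned long long)c.hits, (unsigned long long)c.misses,
                c.estimated_cycles());
}
```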


International Conference on Simulation and Modeling Methodologies, Technologies and Applications | 2016

Cache aware instruction accurate simulation of a 3-D coastal ocean model on low power hardware

Dominik Schoenwetter; Alexander Ditter; Vadym Aizinger; Balthasar Reuter; Dietmar Fey

High-level hardware simulation and modeling techniques have matured significantly over the last years and have become more and more important in practice, e.g., in industrial hardware development and the automotive domain. Yet there are many other challenging application areas, such as numerical solvers for environmental or disaster prediction problems, e.g., tsunami and storm surge simulations, that could greatly profit from accurate and efficient hardware simulation. Such applications rely on complex mathematical models that are discretized using suitable numerical methods; attaining the desired computational performance on current microarchitectures and parallelizing the code to produce accurate simulation results as fast as possible requires close collaboration between mathematicians and computer scientists. This complex and detailed simulation requires a lot of time during preparation and execution. Especially the execution on non-standard or new hardware may be challenging and potentially error-prone. In this paper, we focus on a high-level simulation approach for determining accurate runtimes of applications using instruction accurate modeling and simulation. We extend the basic instruction accurate simulation technology from OVP with cache models in conjunction with a statistical cost function, which enables high-precision and significantly better runtime predictions compared to the pure instruction accurate approach.


Archive | 2016

Virtualization Guided Tsunami and Storm Surge Simulations for Low Power Architectures

Dominik Schoenwetter; Alexander Ditter; Bruno Kleinert; Arne Hendricks; Vadym Aizinger; Dietmar Fey

Performing a tsunami or storm surge simulation in real time on low-power computation devices is a highly challenging research topic with a big impact on the lives of many people. In order to advance this topic further, a tight collaboration between mathematics and computer science is needed. Mathematical models must be combined with numerical methods which, in turn, directly determine the computational performance and efficiency of the solution. Also, code parallelization is required in order to obtain accurate and fast simulation results. Traditional approaches in high performance computing require a lot of computational power and significant amounts of electrical energy; they are also highly dependent on uninterrupted access to a reliable network and power supply. We present a concept for developing solutions on suitable low-power hardware architectures for tsunami and storm surge simulations based on cooperative software and hardware simulation. The main goal is to enable in situ simulations on potentially battery-powered devices on site. Flood warning systems in regions with weak or unreliable power, network and computing infrastructure could benefit greatly from our approach, as it would significantly decrease the risk of network or power failure during the computation.


International Conference on Simulation and Modeling Methodologies, Technologies and Applications | 2015

Tsunami and Storm Surge Simulation Using Low Power Architectures

Dominik Schoenwetter; Alexander Ditter; Bruno Kleinert; Arne Hendricks; Vadym Aizinger; Harald Koestler; Dietmar Fey

Performing a tsunami or storm surge simulation in real time is a highly challenging research topic that calls for a collaboration between mathematicians and computer scientists. One must combine mathematical models with numerical methods and rely on computational performance and code parallelization to produce accurate simulation results as fast as possible. Traditional modeling approaches require a lot of computing power and significant amounts of electrical energy; they are also highly dependent on uninterrupted access to a reliable power supply. This paper presents a concept for developing suitable low-power hardware architectures for tsunami and storm surge simulations based on cooperative software and hardware simulation. The main goal is to be able, if necessary, to perform simulations in situ on battery power. For flood warning systems installed in regions with weak or unreliable power and computing infrastructure, this would significantly decrease the risk of failure at the most critical moments.


International Conference on Imaging Systems and Techniques | 2015

Multi-GPU based evaluation and analysis of prehistoric ice cores using OpenCL

Alexander Ditter; Roman Schaffert; Dietmar Fey; Tobias Schön; Roland Gruber

The analysis of prehistoric ice cores is a well-established instrument in the field of climate research. Until recently, common methods were often based on the analysis of carbon dioxide and methane concentrations. The use of computed tomography based 3-D reconstructions for the evaluation and analysis of prehistoric ice cores offers the possibility to improve the accuracy of age determination by an order of magnitude, from hundreds of years to decades. This, in turn, allows improving the underlying model of the climatic development over the last several hundred thousand years. The use of 3-D volumes allows a much more detailed analysis with respect to the size, amount, distribution and connectivity of air bubbles in the ice cores as a new climatic proxy. In this setting, we present a GPU-based approach for the efficient evaluation and analysis of air bubbles using OpenCL. As the raw data size can grow up to 10 TB per meter of ice core, we focus on a distributable and scalable approach, which is based on component labeling and can be scaled to multiple GPUs using OpenCL.
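
As a CPU-side illustration of the labeling step (the paper's multi-GPU OpenCL implementation is not reproduced here), the following C++ sketch labels connected "bubble" pixels in a tiny 2-D binary slice with a union-find; a distributed version would label tiles independently and then merge labels across tile borders.

```cpp
#include <cstdio>
#include <vector>

// Connected-component labeling on a small binary slice: connected air
// bubble pixels (1) end up sharing a common label so bubble count and
// sizes can be measured. The 2-D example and names are illustrative.

struct UnionFind {
    std::vector<int> parent;
    explicit UnionFind(int n) : parent(n) {
        for (int i = 0; i < n; ++i) parent[i] = i;
    }
    int find(int x) {
        while (parent[x] != x) x = parent[x] = parent[parent[x]];  // path halving
        return x;
    }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};

int main() {
    const int W = 6, H = 4;
    // 1 = air bubble voxel, 0 = ice
    int img[H][W] = {{1,1,0,0,1,0},
                     {0,1,0,1,1,0},
                     {0,0,0,0,0,0},
                     {1,0,0,0,1,1}};
    UnionFind uf(W * H);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            if (!img[y][x]) continue;
            // Merge with already-visited left and upper neighbors.
            if (x > 0 && img[y][x - 1]) uf.unite(y * W + x, y * W + x - 1);
            if (y > 0 && img[y - 1][x]) uf.unite(y * W + x, (y - 1) * W + x);
        }
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x)
            std::printf("%3d", img[y][x] ? uf.find(y * W + x) : -1);
        std::printf("\n");
    }
}
```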

Collaboration


Dive into Alexander Ditter's collaboration network.

Top Co-Authors

Dietmar Fey, University of Erlangen-Nuremberg
Dominik Schoenwetter, University of Erlangen-Nuremberg
Vadym Aizinger, University of Erlangen-Nuremberg
Arne Hendricks, University of Erlangen-Nuremberg
Bruno Kleinert, University of Erlangen-Nuremberg
Franz Richter-Gottfried, University of Erlangen-Nuremberg
Anton Kuzmin, University of Erlangen-Nuremberg
Balthasar Reuter, University of Erlangen-Nuremberg
Benedikt Oehlrich, University of Erlangen-Nuremberg
Gabriel Graf, University of Erlangen-Nuremberg