Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Bernd Klauer is active.

Publication


Featured research published by Bernd Klauer.


Archive | 2013

The Convey Hybrid-Core Architecture

Bernd Klauer

Hybrid Computing is a term that was originally used for computations performed on combined analog/digital hardware and was popular until the late 1970s. Complex computations performed under real-time conditions, such as signal processing, were left in the analog domain because the conversion times of A/D converters were too long, and the sampling rates and clock speeds of processors too low, to solve complex equations in reasonable time. Today, processor layouts still contain analog components in the I/O areas, such as amplifiers, sensors, or A/D converters. For logical or arithmetic computations, however, they have become irrelevant. The renaissance that the term Hybrid Computing has experienced in recent years comes from the combination of hardwired multicore microprocessors and configurable integrated circuits (FPGAs). This chapter focuses on Hybrid Computing and Hybrid-Core Computing, a special form of Hybrid Computing introduced by Convey Computer Corporation.


International Conference on Industrial Technology | 2016

Wireless sensor/actuator device configuration by NFC

Jan Haase; Dominik Meyer; Marcel Eckert; Bernd Klauer

In the area of building automation, many sensor or actuator devices are tiny embedded systems installed throughout the building. The sensors gather information about the current environment (e.g., the temperature or the number of people in a room) and the actuators interact with the environment (e.g., controlling lights, heating, or door access). A central unit controls these devices (wirelessly or by wire); therefore, all devices need at least a unique ID throughout the system and, in many cases, further configuration or authentication data. New or replaced devices have to be registered with the central unit to enable correct control. Thus, the person installing a new device has to prepare it, at minimum by setting an ID, before it can be connected to the network. This can be tedious work, especially for untrained workers. This paper proposes adding an NFC module as an enhancement to standard devices. The devices can then be configured on site using a standard smartphone running an appropriate application, eliminating the need to pre-configure devices with a programmer tool. The presented application enables (first-time or re-) configuration of wireless embedded devices. The prototype features an ATmega328P microcontroller from Atmel and an M24SR02-Y NFC chip from STMicroelectronics.
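As a rough illustration of this configuration flow, the following C sketch shows how firmware on the microcontroller could read a configuration record that a smartphone app has written into the NFC tag's EEPROM. The record layout, the CONFIG_MAGIC constant, and the nfc_eeprom_read() driver call are assumptions made for this example; they are not taken from the paper or from the vendor's libraries.

#include <stdint.h>
#include <string.h>

/* Hypothetical layout of a configuration record that a smartphone app
 * could write into the NFC tag's EEPROM (field names are illustrative,
 * not taken from the paper). */
typedef struct {
    uint32_t magic;            /* marks a valid record                */
    uint16_t device_id;        /* unique ID within the network        */
    uint8_t  network_key[16];  /* authentication material             */
    uint8_t  role;             /* 0 = sensor, 1 = actuator            */
} device_config_t;

#define CONFIG_MAGIC 0xC0FFEE01u

/* Assumed to be provided by the NFC chip driver: reads len bytes
 * starting at addr from the tag's EEPROM over I2C. */
extern int nfc_eeprom_read(uint16_t addr, uint8_t *buf, uint16_t len);

/* Called once at boot: if a valid record is present, adopt it. */
int load_config_from_nfc(device_config_t *cfg)
{
    uint8_t raw[sizeof(device_config_t)];

    if (nfc_eeprom_read(0x0000, raw, sizeof(raw)) != 0)
        return -1;                      /* NFC chip not reachable */

    memcpy(cfg, raw, sizeof(*cfg));
    if (cfg->magic != CONFIG_MAGIC)
        return -1;                      /* tag not yet configured */

    return 0;                           /* cfg now holds ID and key */
}

With such a layout, a blank device simply reports itself as unconfigured until the installer taps it with the phone, matching the plug-and-configure workflow the paper describes.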


ACM SIGARCH Computer Architecture News | 2011

Multicore reconfiguration platform: an alternative to RAMPSoC

Dominik Meyer; Bernd Klauer

The current state of the art in processor performance improvement is multicore-processor systems. These systems offer a number of homogeneous and static processor cores for the parallel distribution of computational tasks. A novel idea in this research field is introduced by the Runtime Adaptive Multi-Processor System-on-Chip (RAMPSoC) approach. It uses a dynamically and partially reconfigurable system to offer a heterogeneous multicore-processor system. It is runtime adaptable to application needs and provides a high degree of freedom for system design and task distribution. The continuation of this idea is the Multicore Reconfiguration Platform (MRP) presented in this paper. Its fine-grained reconfiguration framework offers a higher degree of freedom and achieves better FPGA space utilization, reduced power consumption, and a more precise adaptation to application requirements.


International Conference on Emerging Security Information, Systems and Technologies | 2009

List of Criteria for a Secure Computer Architecture

Igor Podebrad; Klaus Hildebrandt; Bernd Klauer

The security of a digital system depends directly on the security of the hardware platform the system is based on. The analysis of currently available computer architectures has shown that such systems offer a lot of security gaps. This is due to the fact that in the past hardware has only been optimized for speed, never for security. In this paper we propose a set of hardware features to support system security.


International Conference on Industrial Informatics | 2016

A threat-model for building and home automation

Dominik Meyer; Jan Haase; Marcel Eckert; Bernd Klauer

Security and privacy are very important assets within building and home automation because the System Control Unit (SCU) stores and processes a huge amount of data about the inhabitants or employees of the building. This data is necessary for managing the building and increasing the convenience of the people within, but it can also be used to create movement profiles, monitor working times, and draw conclusions about people's health. Modern smart home implementations also control many actuators within the building, including doors, windows, locks, and fire extinguishers. These increase security and safety, but unauthorized control can reduce security and can even be harmful to persons. Therefore, identifying the different security and privacy threats is very important and helps system engineers and system managers to develop and deploy secure systems. This work presents an abstract model of a building automation system and a set of attack trees which simplify threat identification; attack trees are common in secure software development and secure system deployment. An example smart home deployment is evaluated using the proposed model and attack trees to show the feasibility of the approach.
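Attack trees are a standard notation, so a minimal C representation can illustrate how threats in such a model are organized: a goal at an OR node is reachable if any child is reachable, and at an AND node only if all children are. The data structure and any labels used with it are illustrative assumptions, not the trees from the paper.

#include <stdbool.h>
#include <stddef.h>

/* Minimal attack-tree node: leaves carry a feasibility flag set by the
 * analyst; inner nodes combine their children with AND or OR semantics. */
typedef enum { NODE_LEAF, NODE_AND, NODE_OR } node_kind_t;

typedef struct attack_node {
    const char          *label;
    node_kind_t          kind;
    bool                 feasible;      /* only meaningful for leaves */
    struct attack_node **children;
    size_t               n_children;
} attack_node_t;

/* An attacker goal is feasible if enough of its subgoals are feasible. */
static bool is_feasible(const attack_node_t *n)
{
    if (n->kind == NODE_LEAF)
        return n->feasible;

    size_t ok = 0;
    for (size_t i = 0; i < n->n_children; i++)
        if (is_feasible(n->children[i]))
            ok++;

    return n->kind == NODE_AND ? ok == n->n_children : ok > 0;
}

For a smart home, the root goal "unlock the front door without authorization" could, for instance, be an OR node over the leaves "compromise the SCU" and "replay the wireless unlock command"; these labels are invented for illustration.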


International Conference on Communications | 2013

Hardware Based Security Enhanced Direct Memory Access

Marcel Eckert; Igor Podebrad; Bernd Klauer

This paper presents an approach to prevent memory attacks enabled by DMA. DMA is a technique that is frequently used to relieve processors of simple memory transfers; DMA transfers are usually performed during idle times of the bus. A disadvantage of DMA transfers is that they are largely unsupervised by anti-malware agents. Only after the completion of a DMA activity can the transferred data be scanned for malicious code; at that point the malicious structures are already in memory and processor time is needed to perform a malware scan. The approach presented in this paper enhances DMA with a watchdog mechanism that scans the data passing by and interrupts the processor after detecting a malicious data or instruction sequence. Configurable hardware based on FPGAs is used to overcome the problem of frequently changing malware and malware signatures.
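The watchdog idea can be illustrated in software, although the paper realizes it in FPGA logic so that scanning keeps pace with the bus. The C sketch below is a simplified byte-wise matcher over a set of signatures, assumed to be invoked for every byte passing by during a DMA transfer; the data structures are assumptions for this example, and the matcher deliberately omits a full failure function, so signatures with overlapping prefixes may be missed.

#include <stdint.h>
#include <stddef.h>

/* One malware signature: a fixed byte pattern. In the FPGA design the
 * signature set would live in reconfigurable match logic; here it is a
 * table that can be swapped whenever signatures change. */
typedef struct {
    const uint8_t *pattern;
    size_t         len;
    size_t         matched;   /* bytes of this pattern matched so far */
} signature_t;

/* Called for every byte observed on the bus during a DMA transfer.
 * Returns the index of a signature that just completed, or -1. */
static int watchdog_scan_byte(signature_t *sigs, size_t n_sigs, uint8_t byte)
{
    for (size_t i = 0; i < n_sigs; i++) {
        signature_t *s = &sigs[i];
        /* Simplified restart-on-mismatch matching (no KMP fallback). */
        s->matched = (byte == s->pattern[s->matched]) ? s->matched + 1
                   : (byte == s->pattern[0])          ? 1 : 0;
        if (s->matched == s->len) {
            s->matched = 0;
            return (int)i;    /* would raise an interrupt in hardware */
        }
    }
    return -1;
}

In the hardware realization the equivalent of watchdog_scan_byte() runs concurrently with the transfer, which is what removes the post-transfer scanning window the paper identifies as the weakness of conventional DMA.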


International Journal of Reconfigurable Computing | 2016

Operating System Concepts for Reconfigurable Computing

Marcel Eckert; Dominik Meyer; Jan Haase; Bernd Klauer

One of the key future challenges for reconfigurable computing is to enable higher design productivity and an easier way to use reconfigurable computing systems for users who are unfamiliar with the underlying concepts. One way of doing this is to provide standardization and abstraction, usually supported and enforced by an operating system. This article gives a historical review and a summary of ideas and key concepts for including reconfigurable computing aspects in operating systems. The article also presents an overview of published and available operating systems targeting the area of reconfigurable computing. The purpose of this article is to identify and summarize common patterns among those systems that can be seen as a de facto standard. Furthermore, open problems not covered by these already available systems are identified.


2016 International Conference on FPGA Reconfiguration for General-Purpose Computing (FPGA4GPC) | 2016

Architectural requirements for constructing hardware supported sandboxes

Marcel Eckert; Jan Haase; Dominik Meyer; Bernd Klauer

Malicious stealth software can detect that it is being executed in a virtual machine and thus behave differently. If the system virtualization is moved to the hardware level, however, the malware is fooled and can be identified and monitored. This paper gives an overview of requirements for a hardware-supported virtualization facility implemented on an FPGA. These requirements are examined along the basic parts of a typical computer architecture: the processor, memory, and devices. A proof-of-concept demonstrator was implemented on several Xilinx evaluation boards.


2016 International Conference on FPGA Reconfiguration for General-Purpose Computing (FPGA4GPC) | 2016

Generic operating-system support for FPGAs

Dominik Meyer; Marcel Eckert; Jan Haase; Bernd Klauer

Field Programmable Gate Arrays (FPGAs) are by now common in industrial applications and research. Industry utilizes FPGAs for prototyping, small-scale hardware production, and telecommunication hardware. The deployment of FPGAs in research is often High Performance Computing (HPC) centric. FPGAs are, however, rarely used in General Purpose Computing (GPC), for many reasons including missing Operating System (OS) support. This paper presents the idea of integrating FPGAs into OSs for standard personal computers, such as Linux, Mac OS X, and Microsoft Windows. The goal of this integration is to improve the acceptance of hardware acceleration for applications within software companies by separating hardware and software completely and by providing an easy hardware and software Application Programming Interface (API). Another goal is to improve the acceptance of FPGAs among end users by reducing the complexity of connecting FPGAs to standard personal computers; users should be able to use them simply by plug and play. To achieve these goals, the paper introduces OS support for identifying and configuring FPGAs without vendor-specific tools, and for bidirectional communication between the host computer and components configured inside the FPGA.
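The paper's actual programming interface is not reproduced here, so the following C sketch is purely hypothetical: it imagines how such OS integration could look from user space, with the FPGA exposed as a device file, the bitstream streamed to a generic driver without vendor tools, and the configured component reached through ordinary read()/write() calls. All paths and names (/dev/fpga0, /dev/fpga0_core0, accelerator.bit) are invented for illustration.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical user-space view of OS-managed FPGA configuration. */
static int configure_fpga(const char *dev_path, const char *bitstream_path)
{
    int dev = open(dev_path, O_WRONLY);
    if (dev < 0)
        return -1;

    FILE *bs = fopen(bitstream_path, "rb");
    if (bs == NULL) {
        close(dev);
        return -1;
    }

    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), bs)) > 0) {
        if (write(dev, buf, n) != (ssize_t)n) {   /* stream bitstream to driver */
            fclose(bs);
            close(dev);
            return -1;
        }
    }

    fclose(bs);
    close(dev);
    return 0;
}

int main(void)
{
    /* Plug-and-play goal: configure the FPGA, then talk to the core. */
    if (configure_fpga("/dev/fpga0", "accelerator.bit") != 0)
        return 1;

    int core = open("/dev/fpga0_core0", O_RDWR);  /* hypothetical device node */
    if (core < 0)
        return 1;

    unsigned int request = 42, reply = 0;
    write(core, &request, sizeof(request));       /* send data to the hardware   */
    read(core, &reply, sizeof(reply));            /* receive the computed result */
    close(core);
    return 0;
}

The point of such an interface would be that the application only ever sees generic OS primitives, which is the hardware/software separation the paper argues is needed for broader acceptance.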


Proceedings of the 9th International Symposium on Highly-Efficient Accelerators and Reconfigurable Technologies - HEART 2018 | 2018

Low-Latency FIR Filter Structures Targeting FPGA Platforms

Piero Rivera Benois; Patrick Nowak; Udo Zölzer; Marcel Eckert; Bernd Klauer

Finite Impulse Response (FIR) filters are one of the basic building blocks in Digital Signal Processing. The computational complexity of their filtering process is determined by the length of their impulse responses. In high-order systems, the effective group delay and phase response of the implemented filter may deviate restrictively from the designed one, due to the processing latency that adds on top of the group delay. This deviation may break causality or phase-margin constraints within the system, degrading its performance or correct functioning. A good example of systems under such constraints are feedforward and feedback active noise control digital implementations [2, 5]. Various approaches can be used to decrease the overall complexity if the coefficients are known and exhibit certain patterns [3, 6], if a digit-serial architecture is used [4], or if frequency-domain filtering is applied [2]. Nevertheless, reducing the computational complexity (and with it probably increasing the implementation effort) is not the only way towards reducing the processing latency. In the present work, a time-domain sample-by-sample convolution based on precalculations of the output is proposed, which reduces the processing latency to the time required to calculate one multiplication and one addition. This is achieved by changing the chronological order of the calculations, under the constraint that the time needed for the precalculations also has to fit within a sampling period. The implementation can be based on a one-step or on an iterative precalculation process. To achieve higher-order filters with fewer calculation resources, the multiplications and additions needed for the precalculations are grouped into parallel lanes (to some extent similar to [7]) that sequentially compute partial results using the same calculation units. Although performing the same task, the one-step and iterative precalculations have different memory and logic requirements; thus, an evaluation and comparison of both are presented in this work. In the following section, the precalculation strategy is explained. In Section 3, the one-step and iterative precalculations are described together with the grouping and parallelization strategy for reducing the needed computational resources.
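The latency argument can be made concrete with a small C sketch of the precalculation idea (a software analogue, not the paper's FPGA implementation): during each sampling period the partial sum over the already-known input samples is precomputed, so that when the next sample arrives only one multiplication and one addition lie on the critical path from input to output. Filter length and coefficient values are assumptions for this example.

#include <stddef.h>

#define N_TAPS 64

static double h[N_TAPS];           /* impulse response, filled by init code    */
static double x_hist[N_TAPS - 1];  /* x[n-1] ... x[n-(N_TAPS-1)]               */
static double precalc;             /* sum over k>=1 of h[k]*x[n-k], ready
                                      before the new sample x[n] arrives       */

double fir_on_sample(double x_n)
{
    /* Latency-critical path: exactly one multiplication and one addition. */
    double y_n = precalc + h[0] * x_n;

    /* Non-critical path: may run during the remainder of the sampling
     * period (in the paper, mapped onto parallel lanes of shared
     * multiply/add units). Shift the history and precompute the partial
     * sum needed for the next output sample. */
    for (int k = N_TAPS - 2; k > 0; k--)
        x_hist[k] = x_hist[k - 1];
    x_hist[0] = x_n;

    precalc = 0.0;
    for (int k = 1; k < N_TAPS; k++)
        precalc += h[k] * x_hist[k - 1];

    return y_n;
}

In the paper this non-critical portion is what is reorganized into one-step or iterative precalculations and shared across parallel lanes; the sketch only shows why the input-to-output delay shrinks to a single multiply-add.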

Collaboration


Dive into Bernd Klauer's collaborations.

Top Co-Authors

Marcel Eckert, Helmut Schmidt University

Dominik Meyer, Helmut Schmidt University

Jan Haase, Helmut Schmidt University

Dietmar Wippig, Helmut Schmidt University

Patrick Nowak, Helmut Schmidt University

Udo Zölzer, Helmut Schmidt University