Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Oliver Oey is active.

Publication


Featured research published by Oliver Oey.


International Conference on Embedded Computer Systems: Architectures, Modeling, and Simulation | 2011

Heterogeneous and runtime parameterizable Star-Wheels Network-on-Chip

Diana Göhringer; Oliver Oey; Michael Hübner; Jürgen Becker

Multiprocessor Systems-on-Chip (MPSoCs) are a promising solution to fulfill the requirements of high-performance computing applications, such as image processing or bioinformatics. Many different MPSoC architectures have been introduced by industry and academia, such as Intel's Single-Chip Cloud Computer (SCC) or IBM's Cell. To achieve high efficiency on such MPSoCs, high computing performance in the processing elements (PEs) is not the only parameter. The bandwidth of the memory and especially of the on-chip communication infrastructure is extremely important to meet the requirements of the applications. Furthermore, for embedded System-on-Chip solutions, flexibility is extremely important for providing a good tradeoff between performance and power consumption. Especially for general-purpose MPSoCs, designed for a wide range of applications with different communication demands, this flexibility is essential for good energy efficiency. A simple bus or Network-on-Chip (NoC) with fixed bandwidth and communication paradigm is not sufficient in this varying application scenario and its design space. In this paper, novel features for the heterogeneous and runtime-adaptive Star-Wheels Network-on-Chip are presented. This Network-on-Chip is used within a runtime-adaptive MPSoC, which makes it possible to tailor processors and accelerators to the requirements of the applications. To optimize the Star-Wheels Network-on-Chip and its communication protocol at runtime to the application requirements, the following novel features have been added: the combination of the circuit- and the packet-switched communication paradigms and an algorithm-based placement of the processing elements within the NoC. The integration of the novel algorithm into the design methodology and the exploitation of the novel features within the special-purpose runtime operating system of a runtime-adaptive MPSoC are also introduced.
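
The combination of circuit- and packet-switched communication can be made concrete with a small sketch. The C fragment below shows, purely as an assumed illustration, how a sender might pick one of the two paradigms at runtime based on payload size; the threshold, names, and API are hypothetical and are not the Star-Wheels interface.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical illustration: selecting between circuit- and packet-switched
 * transfers at runtime. The threshold, names, and API are assumptions and do
 * not reflect the actual Star-Wheels NoC interface. */

typedef enum { NOC_PACKET_SWITCHED, NOC_CIRCUIT_SWITCHED } noc_mode_t;

/* Long, streaming transfers amortize the set-up cost of a dedicated circuit;
 * short control messages are cheaper as packets. */
#define CIRCUIT_THRESHOLD_BYTES 1024u

static noc_mode_t noc_select_mode(uint32_t payload_bytes)
{
    return (payload_bytes >= CIRCUIT_THRESHOLD_BYTES)
               ? NOC_CIRCUIT_SWITCHED
               : NOC_PACKET_SWITCHED;
}

int main(void)
{
    uint32_t sizes[] = { 64u, 4096u };
    for (int i = 0; i < 2; i++)
        printf("%u-byte transfer -> %s switching\n", sizes[i],
               noc_select_mode(sizes[i]) == NOC_CIRCUIT_SWITCHED
                   ? "circuit" : "packet");
    return 0;
}
```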


Microprocessors and Microsystems | 2013

Compiling Scilab to high performance embedded multicore systems

Timo Stripf; Oliver Oey; Thomas Bruckschloegl; Juergen Becker; Gerard K. Rauwerda; Kim Sunesen; George Goulas; Panayiotis Alefragis; Nikolaos S. Voros; Steven Derrien; Olivier Sentieys; Nikolaos Kavvadias; Grigoris Dimitroulakos; Kostas Masselos; Dimitrios Kritharidis; Nikolaos Mitas; Thomas Perschke

The mapping process of high-performance embedded applications to today's multiprocessor system-on-chip devices suffers from a complex toolchain and programming process. The problem is the expression of parallelism with a purely imperative programming language, which is commonly C. This traditional approach limits the mapping, partitioning and the generation of optimized parallel code, and consequently the achievable performance and power consumption of applications from different domains. The Architecture oriented paraLlelization for high performance embedded Multicore systems using scilAb (ALMA) European project aims to overcome these hurdles through the introduction and exploitation of a Scilab-based toolchain which enables the efficient mapping of applications on multiprocessor platforms from a high level of abstraction. The holistic solution of the ALMA toolchain allows the complexity of both the application and the architecture to be hidden, which leads to better acceptance, reduced development cost, and shorter time-to-market. Driven by the technology restrictions in chip design, the end of exponential growth of clock speeds, and an unavoidably increasing demand for computing performance, ALMA is a fundamental step forward in the necessary introduction of novel computing paradigms and methodologies.
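
To make the goal of such a toolchain concrete, the sketch below shows the kind of partitioned C code that could be generated for an element-wise Scilab vector operation (c = a .* b). The partitioning scheme, names, and use of POSIX threads are assumptions for illustration and are not taken from ALMA's actual output.

```c
#include <pthread.h>
#include <stdio.h>

/* Assumed illustration of partitioned C code a Scilab-to-multicore toolchain
 * could emit for an element-wise vector operation. Structure and names are
 * hypothetical, not ALMA output. */

#define N      1024
#define NCORES 4

static double a[N], b[N], c[N];

typedef struct { int lo, hi; } slice_t;

static void *worker(void *arg)
{
    slice_t *s = (slice_t *)arg;
    for (int i = s->lo; i < s->hi; i++)
        c[i] = a[i] * b[i];
    return NULL;
}

int main(void)
{
    pthread_t tid[NCORES];
    slice_t slices[NCORES];

    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0; }

    /* Static partitioning of the iteration space across the cores. */
    for (int t = 0; t < NCORES; t++) {
        slices[t].lo = t * (N / NCORES);
        slices[t].hi = (t + 1) * (N / NCORES);
        pthread_create(&tid[t], NULL, worker, &slices[t]);
    }
    for (int t = 0; t < NCORES; t++)
        pthread_join(tid[t], NULL);

    printf("c[42] = %f\n", c[42]);   /* expect 84.0 */
    return 0;
}
```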


ACM Transactions on Embedded Computing Systems | 2013

Reliable and adaptive network-on-chip architectures for cyber physical systems

Diana Göhringer; L. Meder; Oliver Oey; Jürgen Becker

Reliability in embedded systems is crucial for many application domains. Especially for safety-critical applications, such as those found in the automotive and avionics domains, high reliability has to be ensured. Chip production technology undergoes a steady shrinking process from today's 25 nanometers. It has been shown that coming technologies, which are much smaller, can have a higher defect rate not only after production but also at runtime. The physical effects at runtime stem from a higher susceptibility to radiation. Since the silicon die of a field-programmable gate array (FPGA) includes a large amount of physical wiring, radiation effects play a major role here. Therefore, this article describes an approach for a reliable Network-on-Chip (NoC) which can be used for an FPGA-based system. The article describes the concept and the physical realization of this NoC and evaluates its reliability.
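
The abstract does not spell out the protection mechanism, so the sketch below only illustrates one common building block against radiation-induced bit flips, bitwise triple modular redundancy (TMR) voting on a flit; it is not claimed to be the method used in the article.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Generic sketch of bitwise triple modular redundancy (TMR) voting on a
 * 32-bit flit. TMR is a common countermeasure against radiation-induced
 * bit flips; it is NOT claimed to be the mechanism used in the article. */

static uint32_t tmr_vote(uint32_t copy_a, uint32_t copy_b, uint32_t copy_c)
{
    /* Each result bit is the majority of the three corresponding input bits. */
    return (copy_a & copy_b) | (copy_a & copy_c) | (copy_b & copy_c);
}

int main(void)
{
    uint32_t flit = 0xDEADBEEFu;
    uint32_t corrupted = flit ^ (1u << 7);   /* single bit flip in one copy */

    printf("voted flit: 0x%08" PRIX32 "\n", tmr_vote(flit, corrupted, flit));
    return 0;
}
```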


Reconfigurable Computing and FPGAs | 2012

Adaptive multiclient network-on-chip memory core: hardware architecture, software abstraction layer, and application exploration

Diana Göhringer; L. Meder; Stephan Werner; Oliver Oey; Jürgen Becker; Michael Hübner

This paper presents the hardware architecture and the software abstraction layer of an adaptive multiclient Network-on-Chip (NoC) memory core. The memory core supports the flexibility of a heterogeneous FPGA-based runtime-adaptive multiprocessor system called RAMPSoC. The processing elements, also called clients, access the memory core via the Network-on-Chip (NoC). The memory core supports a dynamic mapping of an address space for the different clients as well as different data transfer modes, such as variable burst sizes. Thereby, two main limitations of FPGA-based multiprocessor systems are mitigated: the restricted on-chip memory resources and the fact that usually only one physical channel to an off-chip memory exists. Furthermore, a software abstraction layer is introduced, which hides the complexity of the memory core architecture and provides an easy-to-use interface for the application programmer. Finally, the advantages of the novel memory core in terms of performance, flexibility, and user friendliness are shown using a real-world image processing application.
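
A hypothetical sketch of what such a software abstraction layer could look like from the application programmer's point of view is shown below; the handle type, function names, and semantics are invented for illustration and are not the actual RAMPSoC memory-core API.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of a software abstraction layer for a NoC-attached
 * memory core. Names, types, and semantics are invented for illustration. */

typedef struct {
    uint32_t client_id;   /* NoC address of the requesting processing element */
    uint32_t base;        /* base of the address window mapped to this client */
    uint32_t size;        /* size of the mapped window in bytes */
    uint16_t burst_len;   /* preferred burst length for this client */
} memcore_handle_t;

/* Request an address window and a burst configuration from the memory core. */
static int memcore_open(memcore_handle_t *h, uint32_t client_id,
                        uint32_t size, uint16_t burst_len)
{
    h->client_id = client_id;
    h->size      = size;
    h->burst_len = burst_len;
    /* A real layer would negotiate the mapping over the NoC; here we only
     * fill in a placeholder base address. */
    h->base = 0x80000000u + client_id * size;
    return 0;
}

/* Burst read: the layer would split the request into bursts of h->burst_len. */
static int memcore_read(const memcore_handle_t *h, uint32_t offset,
                        void *dst, size_t len)
{
    (void)h; (void)offset; (void)dst; (void)len;
    /* Placeholder: issue (len / burst_len) burst transactions over the NoC. */
    return 0;
}

int main(void)
{
    memcore_handle_t h;
    uint8_t buf[256];
    memcore_open(&h, /*client_id=*/3, /*size=*/0x10000u, /*burst_len=*/64);
    memcore_read(&h, /*offset=*/0, buf, sizeof buf);
    return 0;
}
```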


Digital Systems Design | 2012

From Scilab to High Performance Embedded Multicore Systems: The ALMA Approach

Juergen Becker; Timo Stripf; Oliver Oey; Michael Huebner; Steven Derrien; Daniel Menard; Olivier Sentieys; Gerard K. Rauwerda; Kim Sunesen; Nikolaos Kavvadias; Kostas Masselos; George Goulas; Panayiotis Alefragis; Nikolaos S. Voros; Dimitrios Kritharidis; Nikolaos Mitas; Diana Goehringer

The mapping process of high-performance embedded applications to today's multiprocessor system-on-chip devices suffers from a complex tool chain and programming process. The problem here is the expression of parallelism with a purely imperative programming language, which is commonly C. This traditional approach limits the mapping, partitioning and the generation of optimized parallel code, and consequently the achievable performance and power consumption of applications from different domains. The Architecture oriented paraLlelization for high performance embedded Multicore systems using scilAb (ALMA) European project aims to overcome these hurdles through the introduction and exploitation of a Scilab-based toolchain which enables the efficient mapping of applications on multiprocessor platforms from a high level of abstraction. This holistic solution of the toolchain allows the complexity of both the application and the architecture to be hidden, which leads to better acceptance, reduced development cost, and shorter time-to-market. Driven by the technology restrictions in chip design, the end of exponential growth of clock speeds, and an unavoidably increasing demand for computing performance, ALMA is a fundamental step forward in the necessary introduction of novel computing paradigms and methodologies.


Design, Automation, and Test in Europe | 2012

Virtualized on-chip distributed computing for heterogeneous reconfigurable multi-core systems

Stephan Werner; Oliver Oey; Diana Göhringer; Michael Hübner; Jürgen Becker

Efficiently managing the parallel execution of various application tasks on a heterogeneous multi-core system consisting of a combination of processors and accelerators is difficult due to the complex system architecture. The management of reconfigurable multi-core systems which exploit dynamic and partial reconfiguration in order to, e.g., increase the number of processing elements to fulfill the performance demands of the application is even more complicated. This paper presents a special virtualization layer consisting of one central server and several distributed computing clients to virtualize the complex and adaptive heterogeneous multi-core architecture and to autonomously manage the distribution of the parallel computation tasks onto the different processing elements.
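
As a rough illustration of the server side of such a virtualization layer, the sketch below assigns a newly arriving task to the least-loaded processing element; the data structures and the placement policy are assumptions, not the design described in the paper.

```c
#include <stdio.h>

/* Minimal sketch of a server-side placement decision in a virtualization
 * layer: assign a newly arriving task to the least-loaded processing element.
 * Data structures and policy are assumptions, not the paper's design. */

#define NUM_PE 6

typedef struct {
    int id;
    int active_tasks;       /* current load reported by the client on this PE */
    int is_reconfigurable;  /* PE may be an accelerator added via partial reconfiguration */
} pe_state_t;

static int pick_least_loaded(const pe_state_t pes[], int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (pes[i].active_tasks < pes[best].active_tasks)
            best = i;
    return best;
}

int main(void)
{
    pe_state_t pes[NUM_PE] = {
        {0, 3, 0}, {1, 1, 0}, {2, 4, 1}, {3, 0, 1}, {4, 2, 0}, {5, 2, 1}
    };
    printf("dispatch new task to PE %d\n", pes[pick_least_loaded(pes, NUM_PE)].id);
    return 0;
}
```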


Computational Science and Engineering | 2012

A Compilation- and Simulation-Oriented Architecture Description Language for Multicore Systems

Timo Stripf; Oliver Oey; Thomas Bruckschloegl; Ralf Koenig; George Goulas; Panayiotis Alefragis; Nikolaos S. Voros; Jordy Potman; Kim Sunesen; Steven Derrien; Olivier Sentieys; Juergen Becker

Today's reconfigurable multicore architectures are becoming more and more complex. They consist of several, not necessarily identical, processing units, different interconnect modules, memories, and possibly other components. Programming such architectures requires deep knowledge of the underlying hardware and is thus very time consuming and error prone. On the other hand, automated tool chains that target multicore architectures are typically tailored to one specific architecture type and require a platform-specific programming model. Within the EU FP7 project Architecture oriented paraLlelization for high performance embedded Multicore systems using scilAb (ALMA), we address this shortcoming with a flexible tool chain featuring platform independence at the architecture level as well as in the programming model. Thus, the tool chain is kept retargetable by using a novel architecture description language (ADL) for multiprocessor system-on-chip devices. Applications are expressed using the Scilab programming language, allowing the end user to develop optimized programs without specific knowledge of the target architectures. Thereby, the ADL guides the code generation of the integrated tool flow through coarse- and fine-grained parallelism extraction, parallel code optimizations, and multicore simulation.
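
The abstract does not show the ADL itself, so the sketch below only illustrates, in C data structures, the kind of information such a description might capture (cores, local memories, interconnect); it is not the ALMA ADL or its syntax.

```c
#include <stdio.h>

/* Illustrative only: the kind of information an architecture description
 * language (ADL) for multicore systems might capture, expressed here as C
 * data structures. This is not the ALMA ADL or its syntax. */

typedef struct {
    const char *name;
    int issue_width;     /* instructions issued per cycle */
    int local_mem_kb;    /* size of core-local memory */
} core_desc_t;

typedef struct {
    const char *type;    /* e.g. "bus", "noc-mesh" */
    int width_bits;
} interconnect_desc_t;

typedef struct {
    const char *name;
    int num_cores;
    const core_desc_t *cores;
    interconnect_desc_t interconnect;
} arch_desc_t;

int main(void)
{
    static const core_desc_t cores[] = {
        {"risc-core", 2, 64}, {"risc-core", 2, 64},
        {"vliw-core", 4, 128}, {"vliw-core", 4, 128},
    };
    arch_desc_t arch = {"example-mpsoc", 4, cores, {"noc-mesh", 32}};

    printf("%s: %d cores over a %d-bit %s\n",
           arch.name, arch.num_cores,
           arch.interconnect.width_bits, arch.interconnect.type);
    return 0;
}
```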


International Symposium on Parallel and Distributed Processing and Applications | 2014

A Hierarchical Architecture Description for Flexible Multicore System Simulation

Thomas Bruckschloegl; Oliver Oey; Michael Rückauer; Timo Stripf; Jürgen Becker

As processors and systems-on-chip in the embedded world increasingly become multicore, parallel programming remains a difficult, time-consuming and complicated task. End users who are not parallel-programming experts need to exploit such processors and architectures using high-level programming languages, like Scilab or MATLAB. The ALMA toolset solves this problem: it takes Scilab code as input and produces parallel code for embedded multiprocessor systems-on-chip, using platform quasi-agnostic optimizations. The platform information is provided by an architecture description language designed for flexible system description as well as simulation. A hierarchical system description in combination with a parameterizable simulation environment allows fine-grained trade-offs between simulation performance and simulation accuracy.


Reconfigurable Computing and FPGAs | 2013

A flexible implementation of the PSO algorithm for fine- and coarse-grained reconfigurable embedded systems

Michael Rueckauer; Daniel M. Muñoz; Timo Stripf; Oliver Oey; Carlos H. Llanos; Juergen Becker

The large execution times required to solve complex optimization problems in embedded systems are one of the main challenges in the field of engineering optimization. One solution is acceleration by a specialized hardware implementation. However, this comes along with a loss of flexibility, especially for the realization of the application-specific fitness function. In this paper we present novel solutions for the flexible implementation of the Particle Swarm Optimization (PSO) algorithm targeting the coarse-grained reconfigurable Kahrisma architecture. The effectiveness of the proposed solutions was demonstrated for benchmark test problems by numerical simulations of Kahrisma and of the MicroBlaze soft-core processor mapped on fine-grained reconfigurable technology, using the Open Virtual Platform (OVP) simulator, as well as by an FPGA implementation. Convergence results demonstrate that the proposed solutions achieve the optimal points for different scenarios. Finally, execution time results demonstrate that the Kahrisma implementation with 4-issue width provides the required flexibility to design high-performance embedded optimization systems.
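
For reference, a minimal generic PSO on the sphere benchmark function is sketched below with textbook parameter values; it is not the Kahrisma- or MicroBlaze-specific implementation evaluated in the paper.

```c
#include <stdio.h>
#include <stdlib.h>

/* Generic particle swarm optimization (PSO) on the sphere benchmark
 * f(x) = sum(x_i^2), with textbook parameter values. */

#define DIM        2
#define SWARM      20
#define ITERATIONS 200

static double frand(double lo, double hi)
{
    return lo + (hi - lo) * ((double)rand() / RAND_MAX);
}

static double sphere(const double x[DIM])
{
    double s = 0.0;
    for (int d = 0; d < DIM; d++) s += x[d] * x[d];
    return s;
}

int main(void)
{
    double pos[SWARM][DIM], vel[SWARM][DIM], pbest[SWARM][DIM], pbest_f[SWARM];
    double gbest[DIM], gbest_f = 1e30;
    const double w = 0.72, c1 = 1.49, c2 = 1.49;  /* inertia and acceleration */

    /* Initialize positions, velocities, and personal/global bests. */
    for (int i = 0; i < SWARM; i++) {
        for (int d = 0; d < DIM; d++) {
            pos[i][d] = frand(-5.0, 5.0);
            vel[i][d] = frand(-1.0, 1.0);
            pbest[i][d] = pos[i][d];
        }
        pbest_f[i] = sphere(pos[i]);
        if (pbest_f[i] < gbest_f) {
            gbest_f = pbest_f[i];
            for (int d = 0; d < DIM; d++) gbest[d] = pos[i][d];
        }
    }

    /* Main PSO loop: update velocities and positions, track bests. */
    for (int it = 0; it < ITERATIONS; it++) {
        for (int i = 0; i < SWARM; i++) {
            for (int d = 0; d < DIM; d++) {
                vel[i][d] = w * vel[i][d]
                          + c1 * frand(0, 1) * (pbest[i][d] - pos[i][d])
                          + c2 * frand(0, 1) * (gbest[d]    - pos[i][d]);
                pos[i][d] += vel[i][d];
            }
            double f = sphere(pos[i]);
            if (f < pbest_f[i]) {
                pbest_f[i] = f;
                for (int d = 0; d < DIM; d++) pbest[i][d] = pos[i][d];
                if (f < gbest_f) {
                    gbest_f = f;
                    for (int d = 0; d < DIM; d++) gbest[d] = pos[i][d];
                }
            }
        }
    }
    printf("best fitness: %g\n", gbest_f);
    return 0;
}
```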


Applied Reconfigurable Computing | 2014

Profile-Guided Compilation of Scilab Algorithms for Multiprocessor Systems

Jürgen Becker; Thomas Bruckschloegl; Oliver Oey; Timo Stripf; George Goulas; Nick Raptis; Christos Valouxis; Panayiotis Alefragis; Nikolaos S. Voros; Christos Gogos

The expression of parallelism in commonly used programming languages is still a major problem when mapping high-performance embedded applications to multiprocessor system-on-chip devices. The Architecture oriented paraLlelization for high performance embedded Multicore systems using scilAb (ALMA) European project aims to overcome these hurdles through the introduction and exploitation of a Scilab-based toolchain which enables the efficient mapping of applications on multiprocessor platforms from a high level of abstraction. To achieve maximum performance, the toolchain supports iterative application parallelization using profile-guided application compilation. In this way, the toolchain increases the quality and performance of a parallelized application from iteration to iteration. This holistic solution of the toolchain hides the complexity of both the application and the architecture, which leads to better acceptance, reduced development cost, and shorter time-to-market.
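
The feedback idea behind profile-guided, iterative parallelization can be sketched as follows: per-region execution counts from the previous run determine which region is parallelized next. The names and profile data below are invented for illustration and are not part of the ALMA toolchain.

```c
#include <stdio.h>

/* Hypothetical sketch of the feedback step in profile-guided, iterative
 * parallelization: use per-region execution counts from the previous run to
 * pick the next parallelization candidate. Names and data are invented. */

typedef struct {
    const char *region;   /* e.g. a loop nest identified by the compiler */
    long cycles;          /* cycles spent there according to the profile */
    int  parallelized;    /* already transformed in a previous iteration? */
} profile_entry_t;

static int hottest_unparallelized(const profile_entry_t p[], int n)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (!p[i].parallelized && (best < 0 || p[i].cycles > p[best].cycles))
            best = i;
    return best;
}

int main(void)
{
    profile_entry_t profile[] = {
        {"loop_filter",    900000, 0},
        {"loop_transform", 400000, 1},
        {"loop_output",     50000, 0},
    };
    int n = (int)(sizeof profile / sizeof profile[0]);

    /* One iteration of the feedback loop: pick the next target. */
    int next = hottest_unparallelized(profile, n);
    if (next >= 0)
        printf("next parallelization candidate: %s\n", profile[next].region);
    return 0;
}
```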

Collaboration


Dive into Oliver Oey's collaborations.

Top Co-Authors

Timo Stripf
Karlsruhe Institute of Technology

Jürgen Becker
Karlsruhe Institute of Technology

Thomas Bruckschloegl
Karlsruhe Institute of Technology

Olivier Sentieys
Institut de Recherche en Informatique et Systèmes Aléatoires

Diana Göhringer
Dresden University of Technology

Juergen Becker
French Institute for Research in Computer Science and Automation

Steven Derrien
Institut de Recherche en Informatique et Systèmes Aléatoires