Publication


Featured research published by Joerg Henkel.


Design, Automation, and Test in Europe | 2007

Efficient code density through look-up table compression

Talal Bonny; Joerg Henkel

Code density is a major requirement in embedded system design, since it not only reduces the need for the scarce resource memory but also implicitly improves further important design parameters such as power consumption and performance. In this paper we introduce a novel and efficient hardware-supported approach that belongs to the group of statistical compression schemes, as it is based on canonical Huffman coding. In particular, our scheme is the first to also compress the necessary look-up tables, which can become significant in size if the application is large and/or high compression is desired. Our scheme optimizes the number of generated look-up tables to improve the compression ratio. On average, we achieve compression ratios as low as 49% (already including the overhead of the look-up tables). Thereby, our scheme is entirely orthogonal to approaches that take particularities of a certain instruction set architecture into account. We have conducted evaluations using a representative set of applications and have applied our scheme to three major embedded processor architectures, namely ARM, MIPS, and PowerPC.
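The canonical Huffman coding the scheme builds on can be sketched as follows. This is a generic illustration of canonical code assignment, not the authors' hardware implementation: given only a code length per symbol, the codewords are derived deterministically in lexicographic order, which is what makes the decoder's look-up tables small enough to be worth compressing further.

```python
# Sketch of canonical Huffman code assignment (generic illustration,
# not the authors' implementation). Given a code length per symbol,
# canonical coding derives every codeword deterministically, so a
# decoder only needs the lengths rather than a full code mapping.

def canonical_codes(lengths):
    """Map each symbol to its canonical codeword (as a bit string)."""
    # Canonical order: sort symbols by (code length, symbol).
    symbols = sorted(lengths, key=lambda s: (lengths[s], s))
    codes, code, prev_len = {}, 0, 0
    for s in symbols:
        # When the code length grows, shift left to extend the codeword.
        code <<= (lengths[s] - prev_len)
        codes[s] = format(code, "0{}b".format(lengths[s]))
        prev_len = lengths[s]
        code += 1
    return codes

# Example: code lengths produced by some Huffman construction.
print(canonical_codes({"a": 1, "b": 2, "c": 3, "d": 3}))
# → {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
```

Because the assignment is fully determined by the lengths, two decoders that agree on the lengths agree on every codeword, and the table shrinks to one entry per code length.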


International Conference on VLSI Design | 2011

Self-Immunity Technique to Improve Register File Integrity Against Soft Errors

Hussam Amrouch; Joerg Henkel

Continuous shrinking in feature size, increasing power density, etc. increase the vulnerability of microprocessors to soft errors even in terrestrial applications. The register file is one of the essential architectural components where soft errors can be particularly harmful, because errors may rapidly spread from there throughout the whole system. Thus, register files are recognized as one of the major concerns when it comes to reliability. This paper introduces Self-Immunity, a technique that improves the integrity of the register file with respect to soft errors. It is based on the observation that a certain number of register bits are not always used to represent the value stored in a register, and it addresses the difficulty of exploiting this observation to enhance register file integrity against soft errors. We show that our technique can reduce the vulnerability of the register file considerably while exhibiting lower overhead in terms of area and power consumption compared to the state of the art in register file protection.
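The observation the technique exploits can be made concrete with a small sketch. This is a generic software illustration, not the paper's hardware mechanism: a small two's-complement value occupies only the low-order bits of a register, and the upper bits are redundant sign-extension copies that could instead protect the live bits.

```python
# Generic illustration (not the paper's hardware mechanism) of the
# observation behind Self-Immunity: a small two's-complement value
# needs only its low-order bits; the upper bits are redundant
# sign-extension copies that could protect the live bits instead.

def live_bits(value, width=32):
    """Number of bits actually needed to represent `value` in
    two's complement within a `width`-bit register."""
    mask = (1 << width) - 1
    v = value & mask
    sign = (v >> (width - 1)) & 1
    # Strip redundant sign-extension bits from the top.
    n = width
    while n > 1 and ((v >> (n - 2)) & 1) == sign:
        n -= 1
    return n

# Small magnitudes leave most of a 32-bit register redundant:
print(live_bits(5))       # → 4  (28 upper bits are sign copies)
print(live_bits(-5))      # → 4
print(live_bits(2**30))   # → 32 (large value: nothing redundant)
```

The fraction of redundant bits varies per value and per cycle, which is exactly the difficulty the paper addresses when turning this observation into a low-overhead hardware protection scheme.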


Great Lakes Symposium on VLSI | 2006

Using Lin-Kernighan algorithm for look-up table compression to improve code density

Talal Bonny; Joerg Henkel

The presented work uses code compression to improve the design efficiency of an embedded system. In particular, we present a method and architecture for compressing the so-called look-up tables that are necessary for the decompression process. No other work has yet focused on minimizing the look-up tables which, as we show, have a significant impact on the total overhead of a hardware-based decompression scheme. We introduce a novel and very efficient hardware-supported approach based on canonical Huffman coding. Using the Lin-Kernighan algorithm, we reduce the look-up table size by up to 45%. As a result, we achieve overall compression ratios as low as 45% (already including the overhead of the look-up tables). Thereby, our scheme is entirely orthogonal to approaches that take particularities of a certain instruction set architecture into account, meaning that compression could be improved further. Factoring in this orthogonality, our scheme is the basis for previously unattained efficiency in hardware-supported compression schemes. We have conducted evaluations using a representative set of applications (in terms of size and application domain) and have applied our scheme to three major embedded processor architectures, namely ARM, MIPS, and PowerPC. The hardware evaluation shows no performance penalty.
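Lin-Kernighan is a local-search heuristic originally developed for the travelling-salesman problem; here it is used to order look-up-table entries. As a hedged illustration of the underlying idea (not the paper's cost model), a simplified 2-opt relative of Lin-Kernighan reorders items by reversing segments whenever that lowers the total difference between neighbours:

```python
# Simplified 2-opt relative of the Lin-Kernighan heuristic (generic
# sketch; the paper's actual cost model for LUT entries is not shown).
# Reorders items so adjacent items are similar, by repeatedly
# reversing segments whenever that lowers the total neighbour cost.

def two_opt(items, cost):
    order = list(items)
    improved = True
    while improved:
        improved = False
        for i in range(len(order) - 2):
            for j in range(i + 2, len(order)):
                # Cost change from reversing order[i+1 .. j]:
                # only the two boundary edges are affected.
                before = cost(order[i], order[i + 1])
                after = cost(order[i], order[j])
                if j + 1 < len(order):
                    before += cost(order[j], order[j + 1])
                    after += cost(order[i + 1], order[j + 1])
                if after < before:
                    order[i + 1:j + 1] = reversed(order[i + 1:j + 1])
                    improved = True
    return order

# Example: order numbers so that neighbours are close in value.
print(two_opt([9, 1, 7, 3, 5], cost=lambda a, b: abs(a - b)))
```

Full Lin-Kernighan generalizes this by exchanging a variable number of edges per move, which escapes more local minima than fixed 2-opt at the price of a more involved search.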


Design, Automation, and Test in Europe | 2007

Instruction trace compression for rapid instruction cache simulation

Andhi Janapsatya; Aleksandar Ignjatovic; Sri Parameswaran; Joerg Henkel

Modern application-specific instruction set processors (ASIPs) have customizable caches, where the size, associativity, and line size can all be customized to suit a particular application. To find the cache size best suited for a particular embedded system, the application is executed, traces are obtained, and caches are simulated. Typically, program trace files range from a few megabytes to several gigabytes, and simulating cache performance on such large trace files is time consuming. In this paper, a novel instruction cache simulation methodology is presented that can operate directly on a compressed program trace file without the need for decompression. This feature allows our simulation methodology to achieve an average speedup of 9.67 times over the existing state-of-the-art tool (the Dinero IV cache simulator) for a range of applications from the Mediabench suite.
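The baseline the paper accelerates, plain trace-driven cache simulation, can be sketched as follows. This is a minimal direct-mapped simulator with hypothetical parameters, operating on an uncompressed address trace; the paper's contribution is to perform the equivalent computation directly on the compressed trace.

```python
# Minimal trace-driven simulator for a direct-mapped instruction
# cache (hypothetical parameters; a generic baseline sketch, not the
# paper's method, which works on the compressed trace directly).

def simulate(trace, cache_bytes=1024, line_bytes=16):
    n_lines = cache_bytes // line_bytes
    tags = [None] * n_lines          # one tag per cache line
    hits = misses = 0
    for addr in trace:
        block = addr // line_bytes   # which memory block is fetched
        index = block % n_lines      # which cache line it maps to
        tag = block // n_lines
        if tags[index] == tag:
            hits += 1
        else:
            misses += 1
            tags[index] = tag        # fill the line on a miss
    return hits, misses

# A tight loop re-fetches the same addresses: the first pass takes
# the compulsory misses, later passes hit.
loop = list(range(0, 256, 4)) * 3    # 64 instruction fetches, 3 times
print(simulate(loop))                # → (176, 16)
```

Since each address in a multi-gigabyte trace costs at least one lookup in this scheme, avoiding the per-address walk over a decompressed trace is where the reported 9.67x speedup comes from.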


Design, Automation, and Test in Europe | 2004

Distributed multimedia system design: a holistic perspective

Radu Marculescu; Massoud Pedram; Joerg Henkel

Multimedia systems play a central part in many human activities. Due to the significant advances in the VLSI technology, there is an increasing demand for portable multimedia appliances capable of handling advanced algorithms required in all forms of communication. Over the years, we have witnessed a steady move from standalone (or desktop) multimedia to deeply distributed multimedia systems. Whereas desktop-based systems are mainly optimized based on the performance constraints, power consumption is the key design constraint for multimedia devices that draw their energy from batteries. The overall goal of successful design is then to find the best mapping of the target multimedia application onto the architectural resources, while satisfying an imposed set of design constraints (e.g. minimum power dissipation, maximum performance) and specified QoS metrics (e.g. end-to-end latency, jitter, loss rate) which directly impact the media quality. This paper addresses a few fundamental issues that make the design process particularly challenging and offers a holistic perspective towards a coherent design methodology.


International Conference on Embedded Computer Systems: Architectures, Modeling and Simulation | 2012

Adaptive processor architecture - invited paper

Michael Huebner; Diana Goehringer; Carsten Tradowsky; Joerg Henkel; Jürgen Becker

This paper introduces a novel methodology to adapt the microarchitecture of a processor at run-time. The goal is to tailor the internal architecture to the requirements of an application and the data to be processed. The latter parameter is normally not known at design time, which leads to the development of more general-purpose processors capable of handling the data to be processed in any case. With the novel approach, which keeps the microarchitecture of a processor flexible, the processor can start as a general-purpose device and end up with a specific parameterization, comparable to application-specific processor architectures. Furthermore, the increased degree of freedom that this approach enables for a new quality of processors is described.


ACM SIGDA Newsletter | 2010

What is adaptive computing?

Joerg Henkel; Lars Bauer

Adaptive computing refers to the capability of a computing system to adapt one or more of its properties (e.g. performance) during runtime. There are diverse reasons why it is advantageous for a computing system to adapt during runtime, and there are various enabling techniques and paradigms that allow a computing system to perform such an adaptation. In the following, we limit our explanation of adaptive computing systems to the newest advances in embedded computing systems. Reconfigurable computing is often referred to as adaptive computing; in fact, it is one of the key paradigms that, along with application-specific instruction set processors, enable adaptive computing. We briefly discuss the state of the art in both areas.


International Conference on VLSI Design | 2003

Specification and design of multi-million gate SOCs

Ramesh Chandra; Joerg Henkel; Preeti Ranjan Panda; Sri Parameswaran

Summary form only given. Recent advances in semiconductor technology have made it possible to integrate many millions of transistors on a single chip, and design and verification teams face several challenges in managing this complexity. These challenges include specification and verification at the functional level, closing early on the system-level architecture, extensive simulation of the hardware and software components, and finalizing the path to implementation of the entire SOC. This tutorial covers the state of the art in specification, design, and verification techniques.


CODES '02 10th International Symposium on Hardware/Software Codesign | 2002

Proceedings of the tenth international symposium on Hardware/software codesign

Joerg Henkel; Xiaobo Sharon Hu; Rajesh K. Gupta; Sri Parameswaran


International Symposium on VLSI Design, Automation and Test | 2014

Dark Silicon — A thermal perspective

Joerg Henkel

Collaboration


Dive into Joerg Henkel's collaboration.

Top Co-Authors

Sri Parameswaran, University of New South Wales
Aleksandar Ignjatovic, University of New South Wales
Andhi Janapsatya, University of New South Wales
Vojin G. Oklobdzija, University of Texas at Dallas
Carsten Tradowsky, Karlsruhe Institute of Technology