Martin Radetzki
University of Stuttgart
Publications
Featured research published by Martin Radetzki.
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2010
Adán Kohler; Gert Schley; Martin Radetzki
The structural redundancy inherent to on-chip interconnection networks [networks on chip (NoC)] can be exploited by adaptive routing algorithms in order to provide connectivity even if network components are out of service due to faults, which will appear at an increasing rate with future chip technology nodes. This paper is based on a new, fine-grained functional fault model and a corresponding distributed fault diagnosis method that facilitate determining the fault status of individual NoC switches and their adjacent communication links. Whereas previous work on network fault tolerance assumes switches to be either available or fully out of service, we present a novel adaptive routing algorithm that employs the remaining functionality of partly defective switches. Using diagnostic information, transient faults are handled with a retransmission scheme that avoids the latency penalty of end-to-end repeat requests. Thereby, graceful degradation of NoC communication performance can be achieved even under high failure rates.
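A minimal sketch of the routing idea, in plain C++ rather than the authors' implementation (the coordinates, fault flags, and preference order below are hypothetical): per-port diagnosis results allow a routing function to bypass a single defective port instead of writing off the whole switch.

```cpp
#include <array>
#include <initializer_list>
#include <optional>

// Hypothetical per-switch diagnosis result: one health flag per output port.
enum Port { NORTH = 0, EAST = 1, SOUTH = 2, WEST = 3, LOCAL = 4 };

struct SwitchStatus {
    std::array<bool, 5> port_ok;   // true if the port and its adjacent link passed diagnosis
};

// Select an output port for a packet at switch (x, y) heading to (dx, dy) in a
// 2D mesh. Productive directions (those that reduce the distance) are tried
// first; if all of them are marked faulty, any remaining healthy port serves as
// a detour, so a partly defective switch keeps forwarding traffic.
std::optional<Port> route(int x, int y, int dx, int dy, const SwitchStatus& s) {
    if (dx == x && dy == y) {
        if (s.port_ok[LOCAL]) return LOCAL;
        return std::nullopt;
    }
    std::array<Port, 4> preferred{};
    int n = 0;
    if (dx > x) preferred[n++] = EAST;
    if (dx < x) preferred[n++] = WEST;
    if (dy > y) preferred[n++] = NORTH;
    if (dy < y) preferred[n++] = SOUTH;
    for (int i = 0; i < n; ++i)                       // productive and healthy
        if (s.port_ok[preferred[i]]) return preferred[i];
    for (Port p : {NORTH, EAST, SOUTH, WEST})         // detour via any healthy port
        if (s.port_ok[p]) return p;
    return std::nullopt;                              // no usable output port left
}
```

A complete algorithm additionally has to guarantee deadlock and livelock freedom when misrouting; the sketch leaves that out.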
Networks on Chips | 2009
Adán Kohler; Martin Radetzki
Networks-on-Chips (NoCs) provide inherent structural redundancy of on-chip communication pathways. This redundancy can be exploited to maintain connectivity even if some components of an NoC exhibit faults, which will appear at an increasing rate in future chip generations. Based on a fine-grained functional fault model, error-detecting circuitry, and distributed online fault diagnosis, we determine the fault status of NoC switches, including their adjacent links. The remaining functionality of partly defective switches is utilized by a modified deflection routing algorithm to achieve graceful degradation of packet throughput.
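One arbitration cycle of a deflection router can be sketched as follows (hypothetical data structures, not the paper's implementation): deflection routing keeps no packet buffers, so every packet that arrives in a cycle must leave through some output port in the same cycle, and ports reported faulty by diagnosis are simply excluded from the assignment.

```cpp
#include <algorithm>
#include <array>
#include <vector>

struct Packet {
    int id;
    int preferred_port;   // productive direction derived from the destination
    int age;              // older packets get their preferred port first
};

// Assign each incoming packet to an output port: preferred port if it is
// healthy and still free, otherwise deflect to any free, healthy port.
std::array<int, 4> assign_outputs(std::vector<Packet> packets,
                                  const std::array<bool, 4>& port_ok) {
    std::array<int, 4> out{-1, -1, -1, -1};           // packet id assigned to each port
    std::sort(packets.begin(), packets.end(),
              [](const Packet& a, const Packet& b) { return a.age > b.age; });
    for (const Packet& p : packets) {
        int port = p.preferred_port;
        if (port_ok[port] && out[port] == -1) {       // preferred port is healthy and free
            out[port] = p.id;
            continue;
        }
        for (int q = 0; q < 4; ++q)                   // otherwise deflect
            if (port_ok[q] && out[q] == -1) { out[q] = p.id; break; }
    }
    return out;
}
```

If fewer healthy ports than arriving packets remain, the surplus packets cannot be forwarded in that cycle; a real design has to detect and recover such losses, which the sketch omits.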
Design, Automation, and Test in Europe | 2008
Martin Radetzki; Rauf Salimi Khaligh
Simulation of transaction level models (TLMs) is an established embedded systems design technique. Its use cases include virtual prototyping for early software development, platform simulation for design space exploration, and reference modelling for verification. The different use cases mandate different trade-offs between simulation performance and accuracy. Therefore, multiple TLM abstraction layers have been defined, of which one has to be chosen and integrated into the system model prior to simulation. In this contribution we present a modelling technique that allows covering several layers in a single model and switching between the layers at any time, in particular dynamically during simulation. This feature is employed to automatically adapt simulation accuracy to an appropriate level depending on the model's state, leading to an improved trade-off between simulation performance and accuracy.
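The idea of covering several abstraction layers in one model can be sketched in plain C++ (an illustration only, not the paper's SystemC-based technique; the bus timings and the contention heuristic are invented): the same access function serves a transaction either through a fast, loosely timed estimate or a detailed, beat-by-beat path, chosen per transaction from the model's current state.

```cpp
#include <cstdint>
#include <iostream>

struct Transaction {
    uint64_t address;
    unsigned length;   // bytes
};

// A bus model that covers two abstraction layers in one class. Which layer is
// used is decided per transaction, so accuracy adapts dynamically during
// simulation instead of being fixed when the model is built.
class AdaptiveBus {
public:
    // Returns the simulated transfer delay in nanoseconds.
    uint64_t transport(const Transaction& t) {
        if (outstanding_ == 0)
            return fast_estimate(t);    // no contention: loosely timed estimate suffices
        return detailed_model(t);       // contention: detailed, beat-by-beat path
    }
    void begin_access() { ++outstanding_; }
    void end_access()   { --outstanding_; }

private:
    uint64_t fast_estimate(const Transaction& t) const {
        return 10 + t.length;                   // fixed setup cost + 1 ns per byte
    }
    uint64_t detailed_model(const Transaction& t) const {
        uint64_t delay = 0;
        unsigned beats = (t.length + 3) / 4;    // 4-byte bus words
        for (unsigned i = 0; i < beats; ++i)
            delay += 5 + 2 * outstanding_;      // per-beat cost grows with contention
        return delay;
    }
    int outstanding_ = 0;                       // transactions currently in flight
};

int main() {
    AdaptiveBus bus;
    Transaction t{0x1000, 64};
    std::cout << "uncontended delay: " << bus.transport(t) << " ns\n";
    bus.begin_access();
    std::cout << "contended delay:   " << bus.transport(t) << " ns\n";
    bus.end_access();
}
```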
Design, Automation, and Test in Europe | 2010
Rauf Salimi Khaligh; Martin Radetzki
We present a set of modeling constructs accompanied by a high performance simulation kernel for accuracy-adaptive transaction level models. In contrast to traditional, fixed-accuracy TLMs, the accuracy of adaptive TLMs can be changed during simulation to the level which is most suitable for a given use case and scenario. Ad-hoc development of adaptive models can result in complex models, and the implementation details of adaptivity mechanisms can obscure the actual logic of a model. To simplify and enable systematic development of adaptive models, we have identified several mechanisms which are applicable to a wide variety of models. The proposed constructs relieve the modeler from low-level implementation details of those mechanisms. We have developed an efficient, lightweight simulation kernel optimized for the proposed constructs, which enables parallel simulation of large models on widely available, low-cost multi-core simulation hosts. The modeling constructs and the kernel have been evaluated using industrial benchmark applications.
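As an illustration of what such a construct might look like (the names AdaptiveBehaviour and Accuracy and the memory example are hypothetical, not the paper's API), the wrapper below lets a modeler register one implementation per accuracy level and switch the active level at any time, keeping the adaptivity mechanism out of the model's logic.

```cpp
#include <functional>
#include <map>
#include <stdexcept>

enum class Accuracy { LooselyTimed, ApproximatelyTimed, CycleAccurate };

// Hypothetical "adaptive behaviour" construct: the modeller registers one
// implementation per accuracy level; the surrounding model only calls run(),
// and the active level can be changed at any point during simulation.
template <typename Arg>
class AdaptiveBehaviour {
public:
    void implement(Accuracy level, std::function<void(Arg&)> body) {
        impls_[level] = std::move(body);
    }
    void set_accuracy(Accuracy level) { active_ = level; }
    void run(Arg& a) const {
        auto it = impls_.find(active_);
        if (it == impls_.end()) throw std::runtime_error("no implementation registered");
        it->second(a);
    }
private:
    std::map<Accuracy, std::function<void(Arg&)>> impls_;
    Accuracy active_ = Accuracy::LooselyTimed;
};

// Usage sketch: a memory model with two accuracy levels.
struct MemAccess { unsigned addr; unsigned delay = 0; };

int main() {
    AdaptiveBehaviour<MemAccess> mem;
    mem.implement(Accuracy::LooselyTimed,       [](MemAccess& a) { a.delay = 1; });
    mem.implement(Accuracy::ApproximatelyTimed, [](MemAccess& a) { a.delay = 10 + a.addr % 4; });
    MemAccess acc{0x20};
    mem.run(acc);                                    // loosely timed
    mem.set_accuracy(Accuracy::ApproximatelyTimed);  // switched during "simulation"
    mem.run(acc);
}
```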
Journal of Systems Architecture | 2013
Thomas Canhao Xu; Gert Schley; Pasi Liljeberg; Martin Radetzki; Juha Plosila; Hannu Tenhunen
Due to technological limitations, the manufacturing yield of vertical connections (through-silicon vias, TSVs) in 3D Networks-on-Chip (NoC) decreases rapidly as the number of TSVs grows. The adoption of 3D NoC design depends on the performance and manufacturing cost of the chip. This article presents methods for allocating and placing a minimal number of vertical links, and the corresponding vertical routers, to achieve specified performance goals. A second optimization step maximizes redundancy in order to deal with failing TSVs. Globally optimal solutions are determined for the first time for meshes up to 17x17 nodes in size. A 64-core 3D NoC is modeled based on state-of-the-art 2D chips. We present benchmark results using a cycle-accurate full system simulator based on realistic workloads. Experiments show that under different workloads, an optimal placement with 25% of vertical connections achieves 81.3% of the average network latency and 76.5% of the energy delay product of a full layer-to-layer connection. The performance with 12.5% and 6.25% of vertical connections is also evaluated. Our analysis and experimental results provide a guideline for future 3D NoC design.
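A small sketch of the figure of merit such a placement is judged against (a hypothetical routing assumption, not the paper's optimization model): when only some (x, y) positions carry vertical links, an inter-layer packet travels in-plane to the nearest "elevator", through the TSVs, and on to its destination, and the placement quality shows up in the average hop distance.

```cpp
#include <cstdlib>
#include <iostream>
#include <utility>
#include <vector>

struct Pos { int x, y, z; };

// Hop distance from a to b in a mesh where vertical (TSV) links exist only at
// the given elevator positions: route in-plane to an elevator, change layers
// there, then continue in-plane. Same-layer traffic uses plain XY distance.
int hops(const Pos& a, const Pos& b, const std::vector<std::pair<int, int>>& elevators) {
    if (a.z == b.z) return std::abs(a.x - b.x) + std::abs(a.y - b.y);
    int best = -1;
    for (auto [ex, ey] : elevators) {
        int h = std::abs(a.x - ex) + std::abs(a.y - ey)      // to the elevator
              + std::abs(a.z - b.z)                          // through the TSVs
              + std::abs(ex - b.x) + std::abs(ey - b.y);     // on to the destination
        if (best < 0 || h < best) best = h;
    }
    return best;
}

// Average hop distance over all source/destination pairs: the quantity a
// placement of vertical links would be optimized against.
double average_hops(int W, int H, int L, const std::vector<std::pair<int, int>>& elev) {
    long long total = 0, pairs = 0;
    for (int z1 = 0; z1 < L; ++z1) for (int y1 = 0; y1 < H; ++y1) for (int x1 = 0; x1 < W; ++x1)
    for (int z2 = 0; z2 < L; ++z2) for (int y2 = 0; y2 < H; ++y2) for (int x2 = 0; x2 < W; ++x2) {
        if (x1 == x2 && y1 == y2 && z1 == z2) continue;
        total += hops({x1, y1, z1}, {x2, y2, z2}, elev);
        ++pairs;
    }
    return static_cast<double>(total) / pairs;
}

int main() {
    // Example: 4x4x4 mesh with vertical links at 25% of the (x, y) positions.
    std::vector<std::pair<int, int>> elevators{{0, 0}, {3, 0}, {0, 3}, {3, 3}};
    std::cout << "average hops: " << average_hops(4, 4, 4, elevators) << "\n";
}
```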
International Symposium on Quality Electronic Design | 2002
Ralf Seepold; Natividad Martínez Madrid; Andreas Vörg; Wolfgang Rosenstiel; Martin Radetzki; P. Neumann; J. Haase
The application and development of reusable components (intellectual property, IP) have become a regular part of modern design practices. The IP provider on one side and the IP integrator (user) on the other may be in the same company or separate participants in the microelectronic design market. In both cases, the transfer of IP remains a complex and time-consuming task, and the qualification of IP gains significant relevance for the successful application and transfer of IP. This paper proposes an IP qualification methodology for an automated quality check that also incorporates current standards. By embedding the new concept into the regular design flow, IP transfer comes closer to an easy mix and match of virtual components. The presented approach has been validated in an industrial case study.
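To make "automated quality check" concrete, here is a deliberately simple, checklist-style rule runner over a delivered IP package; the directory layout, the rules, and the tool itself are invented for illustration and are not the methodology or tool described in the paper.

```cpp
#include <filesystem>
#include <iostream>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Hypothetical checklist-style IP quality check: each rule verifies one
// measurable property of the delivered IP package and reports pass or fail.
struct Rule {
    std::string description;
    bool (*check)(const fs::path& ip_root);
};

static bool has_file(const fs::path& root, const char* name) {
    return fs::exists(root / name);
}

int main(int argc, char** argv) {
    fs::path ip_root = argc > 1 ? argv[1] : ".";
    std::vector<Rule> rules = {
        {"RTL sources present",      [](const fs::path& r) { return has_file(r, "rtl"); }},
        {"Testbench present",        [](const fs::path& r) { return has_file(r, "tb"); }},
        {"Documentation present",    [](const fs::path& r) { return has_file(r, "doc/datasheet.pdf"); }},
        {"Synthesis script present", [](const fs::path& r) { return has_file(r, "syn/synth.tcl"); }},
    };
    int failed = 0;
    for (const Rule& rule : rules) {
        bool ok = rule.check(ip_root);
        std::cout << (ok ? "[PASS] " : "[FAIL] ") << rule.description << "\n";
        if (!ok) ++failed;
    }
    return failed;   // non-zero exit code if the package misses deliverables
}
```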
Networks on Chips | 2013
Marcus Eggenberger; Martin Radetzki
With continuing miniaturization, NoCs with 1024 nodes will become realistic around the year 2020. The design of such NoCs requires efficient simulation techniques to evaluate design alternatives and to validate functional correctness. The current state of the art, sequential simulation, will no longer provide acceptable simulation times. Parallel simulation exploiting the multicore and multithreading capabilities of simulation computers is a potential solution. However, current parallel techniques suffer from limited scalability due to the need to synchronize simulation time and access to shared data structures. This work presents a new approach based on an explicit ordering of simulation tasks so that a maximum number of independent tasks is simulated between any two dependent tasks. This enables efficient synchronization and, together with dynamic load balancing, reduces blocking time. A near-linear simulation speedup of up to 15.5 is achieved on a 16-core simulation machine.
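The per-cycle independence this builds on can be sketched with plain C++ threads (a simplified illustration with static partitioning and a placeholder router model, not the authors' simulator, which additionally orders dependent tasks explicitly and balances load dynamically): within one simulated cycle all router models read only the previous cycle's outputs, so they can run in parallel with a single synchronization point per cycle.

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Placeholder router model: computes its next state from the previous cycle's
// inputs only, so all routers within one cycle are independent tasks.
struct Router {
    int state = 0;
    void simulate_cycle(int cycle) { state += cycle; }
};

// One simulated cycle: partition the routers into chunks, one worker thread per
// chunk, and join all workers before the next cycle starts. The join is the
// only synchronization point per cycle.
void simulate(std::vector<Router>& routers, int cycles, unsigned workers) {
    for (int c = 0; c < cycles; ++c) {
        std::vector<std::thread> pool;
        std::size_t chunk = (routers.size() + workers - 1) / workers;
        for (unsigned w = 0; w < workers; ++w) {
            std::size_t begin = w * chunk;
            std::size_t end = std::min(routers.size(), begin + chunk);
            if (begin >= end) break;
            pool.emplace_back([&routers, begin, end, c] {
                for (std::size_t i = begin; i < end; ++i)
                    routers[i].simulate_cycle(c);
            });
        }
        for (auto& t : pool) t.join();
    }
}

int main() {
    std::vector<Router> noc(1024);                   // e.g. a 32x32 mesh
    simulate(noc, 100, std::max(1u, std::thread::hardware_concurrency()));
}
```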
International Embedded Systems Symposium | 2009
Rauf Salimi Khaligh; Martin Radetzki
In recent years, transaction level modeling (TLM) has enabled designers to simulate complex embedded systems and SoCs orders of magnitude faster than simulation at the register transfer level (RTL). The increasing complexity of systems on the one hand, and the availability of low-cost parallel processing resources on the other, have motivated the development of parallel simulation environments for TLMs. The existing simulation environments used for parallel simulation of TLMs are intended for general discrete event models and do not take advantage of the specific properties of TLMs. The fine-grained synchronization and communication between simulators in these environments can become a major impediment to the efficiency of the simulation environment. In this work, we exploit the properties of temporally decoupled TLMs to increase the efficiency of parallel simulation. Our approach does not require a special simulation kernel. We have implemented a parallel TLM simulation framework based on the publicly available OSCI SystemC simulator. The framework is based on the communication interfaces proposed in the recent OSCI TLM-2.0 standard. Our experimental results show reduced synchronization overhead and improved simulation performance.
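The temporal decoupling the approach exploits can be sketched as follows (plain C++, with a hypothetical QuantumKeeper class modeled loosely on the TLM-2.0 quantum-keeper idea, not the paper's framework): a model runs ahead of global time by a local offset and only synchronizes once the offset exceeds the quantum, which keeps synchronization between parallel simulators coarse-grained.

```cpp
#include <cstdint>
#include <iostream>

// Minimal sketch of temporal decoupling: the model accumulates a local time
// offset and only synchronizes (a comparatively expensive operation in a
// parallel setting) once the offset exceeds the time quantum.
class QuantumKeeper {
public:
    explicit QuantumKeeper(uint64_t quantum_ns) : quantum_(quantum_ns) {}
    void advance(uint64_t delay_ns) { local_offset_ += delay_ns; }
    bool need_sync() const { return local_offset_ >= quantum_; }
    void sync(uint64_t& global_time_ns) {      // hand local progress back to the kernel
        global_time_ns += local_offset_;
        local_offset_ = 0;
        ++sync_count_;
    }
    unsigned syncs() const { return sync_count_; }
private:
    uint64_t quantum_;
    uint64_t local_offset_ = 0;
    unsigned sync_count_ = 0;
};

int main() {
    uint64_t global_time = 0;
    QuantumKeeper qk(1000);                    // 1 us quantum
    for (int i = 0; i < 100; ++i) {            // 100 transactions of 50 ns each
        qk.advance(50);
        if (qk.need_sync()) qk.sync(global_time);
    }
    std::cout << "global time: " << global_time
              << " ns after " << qk.syncs() << " synchronizations\n";
}
```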
Design, Automation, and Test in Europe | 2004
Andreas Vörg; Martin Radetzki; Wolfgang Rosenstiel
IP core reuse is necessary to overcome the design gap. Yet experience with IP integration has shown that the risk is still considerably high when dealing with IPs. IP qualification provides IP providers and integrators with measurable quality characteristics that allow for high-quality IP cores and put buy decisions on a quantifiable basis. This paper presents unprecedented results that facilitate comparing the effectiveness of reusing qualified digital soft IP with previous, immature reuse methods. An impressive reduction in IP integration effort, which is profitable for the IP customer, is demonstrated. Moreover, we show that the IP business can be profitable for the IP provider despite the additional qualification effort.
Journal of Information Science and Engineering | 1998
Martin Radetzki; Wolfram Putzke-Röming; Wolfgang Nebel
Abstraction and reuse are keys to dealing with the increasing complexity of electronic systems. We apply object-oriented modeling to achieve more reuse and higher abstraction in hardware design. This requires an object-oriented hardware description language, preferably an extension of VHDL. Several variants of such OO-VHDL are currently being debated. We present our unified approach, Objective VHDL, which adds object-oriented features to the VHDL design entity as well as to the type system to provide maximum modeling power.