Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Christian Brugger is active.

Publication


Featured research published by Christian Brugger.


Proceedings of the 2015 International Symposium on Memory Systems | 2015

Omitting Refresh: A Case Study for Commodity and Wide I/O DRAMs

Matthias Jung; Éder F. Zulian; Deepak M. Mathew; Matthias Herrmann; Christian Brugger; Christian Weis; Norbert Wehn

Dynamic Random Access Memories (DRAMs) have a large impact on performance and contribute significantly to the total power consumption of systems ranging from mobile devices to servers. Up to half of the power consumption of future high-density DRAM devices will be caused by refresh commands. Moreover, the refresh rate depends not only on the device capacity but also strongly on the temperature. In the case of 3D integration of MPSoCs with Wide I/O DRAMs, power density and thermal dissipation increase dramatically; hence, 3D-DRAMs require even more refresh operations. To master these challenges, clever DRAM refresh strategies are mandatory at the hardware or software level, using new or already available infrastructure such as Partial Array Self Refresh (PASR) or Temperature Compensated Self Refresh (TCSR). In this paper, we show that for dedicated applications refresh can be disabled completely with no or negligible impact on application performance. This is possible if it is assured that either the lifetime of the data is shorter than the currently required DRAM refresh period or the application can tolerate bit errors to some degree within a given time window.
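The decision rule in the last two sentences can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the 64 ms base window and the halving of retention per 10 °C above 85 °C are common rules of thumb, and real retention behavior is device-specific.

```python
# Hypothetical sketch: decide whether DRAM refresh can be omitted for data
# whose contents are only needed for a short, known lifetime. The halving
# of the retention window per 10 degC above 85 degC is an assumed rule of
# thumb; real retention curves are device-specific.

BASE_REFRESH_MS = 64.0  # assumed standard retention window at <= 85 degC

def required_refresh_period_ms(temp_c: float) -> float:
    """Temperature-compensated refresh period (assumed halving per 10 degC)."""
    if temp_c <= 85.0:
        return BASE_REFRESH_MS
    return BASE_REFRESH_MS / (2.0 ** ((temp_c - 85.0) / 10.0))

def can_omit_refresh(data_lifetime_ms: float, temp_c: float,
                     error_tolerant: bool = False) -> bool:
    """Refresh can be skipped if the data dies before it could decay,
    or if the application tolerates occasional bit errors anyway."""
    return error_tolerant or data_lifetime_ms <= required_refresh_period_ms(temp_c)

# A buffer rewritten every 16.7 ms never needs refreshing at 85 degC.
print(can_omit_refresh(16.7, 85.0))    # True
# At 105 degC the assumed window shrinks to 16 ms, so 100 ms data decays.
print(can_omit_refresh(100.0, 105.0))  # False
```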


IEEE Conference on Computational Intelligence for Financial Engineering & Economics | 2014

Mixed precision multilevel Monte Carlo on hybrid computing systems

Christian Brugger; Christian de Schryver; Norbert Wehn; Steffen Omland; Mario Hefter; Klaus Ritter; Anton Kostiuk; Ralf Korn

Nowadays, high-speed computations are mandatory for financial and insurance institutes to survive in competition and to fulfill the regulatory reporting requirements that have toughened over recent years. A majority of these computations are carried out on huge computing clusters, which are an ever-increasing cost burden for the financial industry. There, state-of-the-art CPU and GPU architectures execute arithmetic operations with pre-defined precisions only, which may not match the actual requirements of a specific application. Reconfigurable architectures like field programmable gate arrays (FPGAs) have a huge potential to accelerate financial simulations while consuming very little energy by exploiting dedicated precisions in optimal ways. In this work we present a novel methodology to speed up multilevel Monte Carlo (MLMC) simulations on reconfigurable architectures. The idea is to aggressively lower the precisions for different parts of the algorithm without losing any accuracy at the end. For this, we have developed a novel heuristic that selects an appropriate precision at each stage of the simulation and can be executed at low cost at runtime. Further, we introduce a cost model for reconfigurable architectures and minimize the cost of our algorithm without changing the overall error. We consider the showcase of pricing Asian options in the Heston model. For this setup we improve one of the most advanced simulation methods by a factor of 3-9x on the same platform.
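The core idea, lowering precision on parts of the estimator while the telescoping sum preserves overall accuracy, can be sketched in software by emulating reduced precision with float32 on the coarse levels. Everything below (the GBM test model, level counts, path counts) is an illustrative stand-in for the paper's FPGA setting, not its actual implementation:

```python
import numpy as np

# Hedged sketch of mixed-precision multilevel Monte Carlo: estimate E[S_T]
# under geometric Brownian motion via Euler steps, using float32 on the
# cheap coarse levels and float64 on the finest ones. Parameters are
# illustrative, not the paper's Asian-option/Heston setup.

rng = np.random.default_rng(0)
S0, mu, sigma, T = 100.0, 0.05, 0.2, 1.0

def level_estimate(level, n_paths, dtype):
    """Estimate E[P_l - P_{l-1}] with coupled coarse/fine paths (P_{-1} := 0)."""
    n_fine = 2 ** level
    dt = dtype(T / n_fine)
    dW = rng.normal(0.0, np.sqrt(T / n_fine), (n_paths, n_fine)).astype(dtype)
    s_fine = np.full(n_paths, S0, dtype=dtype)
    for k in range(n_fine):
        s_fine += s_fine * (mu * dt + sigma * dW[:, k])
    if level == 0:
        return float(s_fine.mean())
    dWc = dW[:, 0::2] + dW[:, 1::2]      # coarse path reuses the same randomness
    s_coarse = np.full(n_paths, S0, dtype=dtype)
    for k in range(n_fine // 2):
        s_coarse += s_coarse * (mu * (2 * dt) + sigma * dWc[:, k])
    return float((s_fine - s_coarse).mean())

# Telescoping sum: low precision on levels 0-2, full precision above.
estimate = sum(level_estimate(l, 20000, np.float32 if l < 3 else np.float64)
               for l in range(5))
print(round(estimate, 2))  # roughly 105 (analytic E[S_T] = S0 * e^(mu*T) ~ 105.13)
```

The coupling (coarse path driven by pairwise sums of the fine path's increments) is what keeps the variance of each correction term small, so the low-precision levels only need to be cheap, not accurate on their own.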


High Performance Computational Finance | 2014

A systematic methodology for analyzing closed-form Heston pricers regarding their accuracy and runtime

Christian Brugger; Gongda Liu; Christian de Schryver; Norbert Wehn

Calibration methods are the heart of modeling any financial process. While (semi) closed-form solutions exist for calibrating the Heston model to simple products, their evaluation involves complex functions and infinite integrals. So far these integrals can only be solved with time-consuming numerical methods. For that reason, calibration consumes a large portion of the available compute power in the daily finance business, and it is worth identifying the best available methods with respect to runtime and accuracy. However, over the years more and more theoretical and practical subtleties have been revealed, and today a large number of approaches are available, including different formulations of the closed-form solutions and various integration algorithms such as quadrature or Fourier methods. Currently there is no clear indication of which pricing method should be used for a specific calibration purpose under additional speed and accuracy constraints. With this publication we close this gap. We derive a novel methodology to systematically find the best methods for a well-defined accuracy target among a huge set of available methods. For a practical setup we study the popular closed-form solutions and integration algorithms from the literature. In total we compare 14 pricing methods, including adaptive quadrature and Fourier methods. For a target accuracy of 10^-3 we show that static Gauss-Legendre quadrature is best on CPUs for the unrestricted parameter set. Further, we show that for the restricted Carr-Madan formulation the methods are 3.6x faster. We also show that Fourier methods are even better when pricing at least 10 options with the same maturity but different strikes.
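As an illustration of the static Gauss-Legendre approach the benchmark favors, the sketch below applies a fixed, precomputed Gauss-Legendre rule to a truncated semi-infinite integral. The integrand is a simple damped oscillation standing in for the Heston characteristic-function integrand; the truncation point and node count are illustrative choices, not the paper's:

```python
import numpy as np

def gauss_legendre_integral(f, a, b, n):
    """Integrate f over [a, b] with n precomputed Gauss-Legendre nodes."""
    x, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)      # affine map onto [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))

# Damped oscillation as a stand-in integrand; analytically,
# the integral of exp(-u) * cos(3u) over [0, inf) equals 1/(1 + 3^2) = 0.1.
f = lambda u: np.exp(-u) * np.cos(3.0 * u)
val = gauss_legendre_integral(f, 0.0, 20.0, 64)  # truncate where exp(-u) ~ 2e-9
print(round(val, 6))  # 0.1
```

The "static" aspect is that nodes and weights depend only on `n`, so they can be precomputed once and reused for every option, unlike adaptive quadrature, which re-evaluates its subdivision per integrand.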


Design, Automation and Test in Europe | 2015

Reverse Longstaff-Schwartz American option pricing on hybrid CPU/FPGA systems

Christian Brugger; Javier Alejandro Varela; Norbert Wehn; Songyin Tang; Ralf Korn

In today's markets, high-speed and energy-efficient computations are mandatory in the financial and insurance industry. At the same time, the gradual convergence of high-performance computing with embedded systems is having a huge impact on design methodologies, where dedicated accelerators are implemented to increase performance and energy efficiency. This paper follows this trend and presents a novel way to price high-dimensional American options using techniques from the embedded community. The proposed architecture targets heterogeneous CPU/FPGA systems, and it exploits FPGA reconfiguration to deliver high throughput. With a bit-true algorithmic transformation based on recomputation, it is possible to eliminate the memory bottleneck and access costs. The result is a pricing system that is 16x faster and 268x more energy-efficient than an optimized Intel CPU implementation.
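The recomputation idea can be sketched as follows: with a counter-based random number generator, each Gaussian increment is reproducible from (seed, step), so the exact path update can be inverted during the backward sweep instead of storing every intermediate state. The model and sizes are illustrative; the paper's bit-true fixed-point transformation is more involved:

```python
import numpy as np

# Hedged sketch of recomputation in a reverse sweep: memory stays O(paths)
# instead of O(paths x steps). GBM stands in for the paper's model; the
# (seed, step) seeding scheme is an illustrative choice.

S0, r, sigma, T, steps, n_paths = 100.0, 0.03, 0.2, 1.0, 8, 4
dt = T / steps

def dW(seed, step, n):
    """Counter-based increments: reproducible from (seed, step) alone."""
    return np.random.default_rng((seed, step)).normal(0.0, np.sqrt(dt), n)

def step_forward(s, seed, step):
    return s * np.exp((r - 0.5 * sigma**2) * dt + sigma * dW(seed, step, len(s)))

def step_backward(s, seed, step):
    """Inverse of step_forward, replaying the identical increments."""
    return s / np.exp((r - 0.5 * sigma**2) * dt + sigma * dW(seed, step, len(s)))

seed = 42
s = np.full(n_paths, S0)
for k in range(steps):            # forward sweep: keep only the final states
    s = step_forward(s, seed, k)
s_T = s.copy()

for k in reversed(range(steps)):  # backward sweep recomputes earlier states
    s = step_backward(s, seed, k)

print(np.allclose(s, S0))  # True: the path is recovered without storing it
```

In a Longstaff-Schwartz backward induction, each exercise date's regression would consume the states produced by this reverse sweep as they are regenerated, which is what removes the memory bottleneck.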


Field Programmable Logic and Applications | 2014

HyPER: A runtime reconfigurable architecture for Monte Carlo option pricing in the Heston model

Christian Brugger; Christian de Schryver; Norbert Wehn

High-speed and energy-efficient computations are mandatory in the financial and insurance industry to survive in competition and meet regulatory reporting requirements. On a hybrid CPU/FPGA system we propose a modular pricing engine and derive a novel algorithmic extension able to exploit online dynamic reconfiguration. The result is a high-performance and energy-efficient pricing system suitable for exotic option pricing in the state-of-the-art Heston market model. With the online reconfiguration extension, our hybrid pricing system is nearly two orders of magnitude faster than high-end Intel CPUs while consuming the same power.
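For context, the Monte Carlo kernel such a Heston pricing engine accelerates looks roughly like the following Euler scheme with full truncation of the variance process. All parameters are illustrative, not taken from the paper:

```python
import numpy as np

# Sketch of a Heston Monte Carlo kernel: Euler discretization with full
# truncation of the variance, pricing a plain European call. Illustrative
# parameters only; a real engine would price exotic payoffs.

rng = np.random.default_rng(1)
S0, v0, kappa, theta, xi, rho = 100.0, 0.04, 2.0, 0.04, 0.3, -0.7
r, T, K, steps, n_paths = 0.03, 1.0, 100.0, 64, 50000
dt = T / steps

S = np.full(n_paths, S0)
v = np.full(n_paths, v0)
for _ in range(steps):
    z1 = rng.normal(size=n_paths)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.normal(size=n_paths)  # correlated
    vp = np.maximum(v, 0.0)                 # full truncation of the variance
    S *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
    v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2

price = np.exp(-r * T) * np.maximum(S - K, 0.0).mean()
print(round(price, 2))  # roughly 9-10 for these illustrative parameters
```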


Advances in Social Networks Analysis and Mining | 2015

Exploiting Phase Transitions for the Efficient Sampling of the Fixed Degree Sequence Model

Christian Brugger; André Lucas Chinazzo; Alexandre Flores John; Christian de Schryver; Norbert Wehn; Andreas Spitz; Katharina Anna Zweig

Real-world network data is often very noisy and contains erroneous or missing edges. These superfluous and missing edges can be identified statistically by assessing the number of common neighbors of the two incident nodes. To evaluate whether this number of common neighbors, the so-called co-occurrence, is statistically significant, a comparison with the expected co-occurrence in a suitable random graph model is required. For networks with a skewed degree distribution, including most real-world networks, it is known that the fixed degree sequence model, which maintains the degrees of nodes, is preferable to simplified graph models that are based on an independence assumption. However, the use of a fixed degree sequence model requires sampling from the space of all graphs with the given degree sequence and measuring the co-occurrence of each pair of nodes in each of the samples, since there is no known closed formula for this statistic. While there exist log-linear approaches such as Markov chain Monte Carlo sampling, the computational complexity still depends on the length of the Markov chain and the number of samples, which is significant in large-scale networks. In this article, we show, based on ground-truth data, that there are various phase-transition-like tipping points that enable us to choose a comparatively low number of samples and to reduce the length of the Markov chains without reducing the quality of the significance test. As a result, the computational effort can be reduced by an order of magnitude.
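The sampling machinery in question can be sketched as a Markov chain of degree-preserving double edge swaps, with the co-occurrence of a node pair measured across thinned samples. The toy graph, chain length, burn-in, and thinning below are arbitrary illustrative choices:

```python
import random

# Hedged sketch of fixed-degree-sequence sampling: a chain of double edge
# swaps keeps every node's degree fixed while randomizing the wiring, and
# the co-occurrence of a node pair is averaged over thinned samples.

random.seed(3)

def double_edge_swap(edges, tries=100):
    """One degree-preserving swap: (a,b),(c,d) -> (a,d),(c,b), accepted only
    if the graph stays simple (no self-loops, no multi-edges)."""
    edge_set = set(edges)
    for _ in range(tries):
        (a, b), (c, d) = random.sample(edges, 2)
        if len({a, b, c, d}) < 4:
            continue                        # would create a self-loop
        if {(a, d), (d, a), (c, b), (b, c)} & edge_set:
            continue                        # would create a multi-edge
        edges.remove((a, b)); edges.remove((c, d))
        edges.append((a, d)); edges.append((c, b))
        return

def co_occurrence(edges, u, v):
    """Number of common neighbors of u and v."""
    nbrs = lambda x: {b for a, b in edges if a == x} | {a for a, b in edges if b == x}
    return len(nbrs(u) & nbrs(v))

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 0)]
samples = []
for i in range(500):                        # the Markov chain of swaps
    double_edge_swap(edges)
    if i >= 100 and i % 10 == 0:            # burn-in, then thinning
        samples.append(co_occurrence(edges, 0, 1))
expected = sum(samples) / len(samples)
print(0.0 <= expected <= 2.0)  # True: bounded by the smaller degree of the pair
```

The article's tipping points concern exactly the two knobs visible here: how many swaps separate consecutive samples and how many samples are averaged.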


FPGA Based Accelerators for Financial Applications | 2015

Exploiting Mixed-Precision Arithmetics in a Multilevel Monte Carlo Approach on FPGAs

Steffen Omland; Mario Hefter; Klaus Ritter; Christian Brugger; Christian de Schryver; Norbert Wehn; Anton Kostiuk

Nowadays, high-speed computations are mandatory for financial and insurance institutes to survive in competition and to fulfill the regulatory reporting requirements that have toughened over recent years. A majority of these computations are carried out on huge computing clusters, which are an ever-increasing cost burden for the financial industry. There, state-of-the-art CPU and GPU architectures execute arithmetic operations with predefined precisions only, which may not match the actual requirements of a specific application. Reconfigurable architectures like Field Programmable Gate Arrays (FPGAs) have a huge potential to accelerate financial simulations while consuming very little energy by exploiting dedicated precisions in optimal ways. In this work we present a novel methodology to speed up Multilevel Monte Carlo (MLMC) simulations on reconfigurable architectures. The idea is to aggressively lower the precisions for different parts of the algorithm without losing any accuracy at the end. For this, we have developed a novel heuristic that selects an appropriate precision at each stage of the simulation and can be executed at low cost at runtime. Further, we introduce a cost model for reconfigurable architectures and minimize the cost of our algorithm without changing the overall error. We consider the showcase of pricing Asian options in the Heston model. For this setup we improve one of the most advanced simulation methods by a factor of 3–9× on the same platform.


International Parallel and Distributed Processing Symposium | 2012

RIVER: Reconfigurable Pre-Synthesized-Streaming Architecture for Signal Processing on FPGAs

Dominic Hillenbrand; Christian Brugger; Jie Tao; Shufan Yang; M. Balzer

We present a scalable, run-time configurable, and programmable signal processing architecture for real-time applications that covers a wide performance spectrum. Our approach goes beyond conventional special-purpose signal processing engines. Scalability has multiple dimensions: at the core level and the network level. We base our novel architecture on programmable components that can be re-combined and re-configured at run-time to match application-specific requirements for signal processing tasks. Users of the RIVER architecture can use our pre-synthesized cores to avoid HDL coding and lengthy FPGA translation. For evaluation we have mapped computation- and memory-intensive kernels to the RIVER architecture and achieved 250 GMACs, which is significantly (1.6-2x) more than many high-end DSPs provide.


Concurrency and Computation: Practice and Experience | 2016

Precision-tuning and hybrid pricer for closed-form solution-based Heston calibration

Christian Brugger; Gongda Liu; Christian de Schryver; Norbert Wehn

Calibration methods are the heart of modeling any financial process. While for the Heston model (semi) closed-form solutions exist for simple products, their evaluation involves complex functions and infinite integrals. So far, these integrals can only be solved with time-consuming numerical methods. For that reason, calibration consumes a large portion of available compute power in the daily finance business.


Reconfigurable Computing and FPGAs | 2015

Exploiting the Brownian bridge technique to improve Longstaff-Schwartz American option pricing on FPGA systems

Javier Alejandro Varela; Christian Brugger; Christian de Schryver; Norbert Wehn; Songyin Tang; Steffen Omland

Risk analysis and management is a very compute-intensive task that needs to be performed on a regular (daily) basis. FPGAs have already shown acceleration potential in financial applications with high energy efficiency. In this paper, we present a novel way to price multi-dimensional American options (heavily used in risk management) targeting heterogeneous CPU/FPGA systems. We demonstrate how an architectural limitation of the Longstaff-Schwartz algorithm is solved by means of an algorithmic transformation employing the Brownian bridge technique. Based on this, we present a new pricing system on FPGAs that achieves a 2x runtime improvement over the state-of-the-art solution in the same technology, with a maximum resource overhead of 15%. On top of that, our proposed architecture is 1.8x more energy-efficient than the same reference.
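The key property of the Brownian bridge exploited here is that, given the Brownian values at two times, the value at any intermediate time is Gaussian with a linearly interpolated mean and a reduced variance, so path points can be generated (or regenerated) out of order. A minimal sketch, with illustrative times and sample counts:

```python
import numpy as np

# Hedged sketch of the Brownian bridge construction: sample W(t) conditioned
# on known values at two surrounding times. Times and sample sizes are
# illustrative, not the paper's setup.

rng = np.random.default_rng(7)

def brownian_bridge(w1, w2, t1, t, t2, rng):
    """Sample W(t) given W(t1) = w1 and W(t2) = w2, for t1 < t < t2."""
    mean = w1 + (t - t1) / (t2 - t1) * (w2 - w1)   # linear interpolation
    var = (t2 - t) * (t - t1) / (t2 - t1)          # reduced conditional variance
    return rng.normal(mean, np.sqrt(var))

# Generate the endpoint first, then fill in the midpoint out of order.
n = 200_000
w_end = rng.normal(0.0, 1.0, n)                    # W(1) ~ N(0, 1)
w_mid = brownian_bridge(0.0, w_end, 0.0, 0.5, 1.0, rng)

# Unconditionally, W(0.5) must have variance 0.5; the bridge reproduces it.
print(abs(w_mid.var() - 0.5) < 0.01)  # True
```

Generating points out of order is what lets the backward induction of Longstaff-Schwartz visit exercise dates from last to first without first materializing and storing the full forward paths.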

Collaboration


Dive into Christian Brugger's collaborations.

Top Co-Authors

Norbert Wehn (Kaiserslautern University of Technology)
Christian de Schryver (Kaiserslautern University of Technology)
Javier Alejandro Varela (Kaiserslautern University of Technology)
Katharina Anna Zweig (Kaiserslautern University of Technology)
Gongda Liu (Kaiserslautern University of Technology)
Ralf Korn (Kaiserslautern University of Technology)
Songyin Tang (Kaiserslautern University of Technology)
Steffen Omland (Kaiserslautern University of Technology)
Alexandre Flores John (Kaiserslautern University of Technology)
André Lucas Chinazzo (Kaiserslautern University of Technology)