
Publications


Featured research published by Daniel Häggander.


International Conference on Engineering of Complex Computer Systems | 2001

Quality attribute conflicts - experiences from a large telecommunication application

Daniel Häggander; Lars Lundberg; Jonas Matton

Modern telecommunication applications must provide high availability and performance. They must also be maintainable in order to reduce the maintenance cost and time-to-market for new versions. Previous studies have shown that the ambition to build maintainable systems may result in very poor performance. We evaluate an application called SDP pre-paid and show that the ambition to build systems with high performance and availability can lead to a complex software design with poor maintainability. We show that more than 85% of the SDP code is due to performance and availability optimizations. By implementing an SDP prototype with an alternative architecture, we show that the code size can be reduced by an order of magnitude by removing the performance and availability optimizations from the source code and instead using modern fault-tolerant hardware and third-party software. The performance and availability of the prototype are at least as good as those of the old SDP, while the hardware and third-party software cost is only 20-30% higher. We also define three guidelines that help focus the additional hardware investment on the parts of the system where it is really needed.


International Parallel and Distributed Processing Symposium | 2003

Recovery schemes for high availability and high performance distributed real-time computing

Lars Lundberg; Daniel Häggander; Kamilla Klonowska; Charlie Svahnberg

Clusters and distributed systems offer fault tolerance and high performance through load sharing, and are thus attractive in real-time applications. When all computers are up and running, we would like the load to be evenly distributed among the computers. When one or more computers fail, the load must be redistributed. The redistribution is determined by the recovery scheme. The recovery scheme should keep the load as evenly distributed as possible even when the most unfavorable combinations of computers break down, i.e. we want to optimize the worst-case behavior. In this paper we define recovery schemes that are optimal for a number of important cases. We also show that the problem of finding optimal recovery schemes corresponds to the mathematical problem of finding sequences of integers with minimal sum and for which all sums of subsequences are unique.
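
The mathematical problem the abstract ends with is easy to experiment with. The following is a minimal C++ sketch, not the paper's recovery-scheme construction; it assumes "subsequence" means a contiguous run, and the example sequence is hypothetical. It simply checks the uniqueness property that, together with a minimal total sum, characterizes the sequences the authors look for.

```cpp
// Minimal sketch of the abstract's mathematical problem, not the paper's
// recovery-scheme construction.  Assumption: "subsequence" is read as a
// contiguous run of the sequence.  The example sequence is hypothetical.
#include <cstddef>
#include <iostream>
#include <set>
#include <vector>

// True if every sum of a contiguous subsequence of seq is distinct.
bool subsequence_sums_unique(const std::vector<int>& seq) {
    std::set<long long> seen;
    for (std::size_t i = 0; i < seq.size(); ++i) {
        long long sum = 0;
        for (std::size_t j = i; j < seq.size(); ++j) {
            sum += seq[j];
            if (!seen.insert(sum).second) return false;  // duplicate sum found
        }
    }
    return true;
}

int main() {
    std::vector<int> candidate = {1, 2, 5, 11};          // hypothetical example
    long long total = 0;
    for (int x : candidate) total += x;
    std::cout << (subsequence_sums_unique(candidate) ? "all sums unique" : "collision")
              << ", total sum = " << total << "\n";      // the paper minimizes this sum
}
```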


Lecture Notes in Computer Science | 2001

Conflicts and Trade-Offs between Software Performance and Maintainability

Lars Lundberg; Daniel Häggander; Wolfgang Diestelkamp

This chapter presents experiences from five large performance-demanding industrial applications. Performance and maintainability are two prioritized qualities in all of these systems. We have identified a number of conflicts between performance and maintainability, and three major techniques for handling them: (1) defining guidelines for obtaining acceptable performance without seriously degrading maintainability; (2) developing implementation techniques that guarantee acceptable performance for programs designed for maximum maintainability; and (3) using modern execution platforms that guarantee acceptable performance without sacrificing maintainability. We conclude that the relevant performance question is not only whether the system meets its performance requirements using a certain software design on a certain platform. An equally interesting question is whether the system can be made more maintainable by changing the software architecture and compensating for this with modern hardware and/or optimized resource allocation algorithms and techniques.


International Conference on Parallel Processing | 2001

A method for automatic optimization of dynamic memory management in C++

Daniel Häggander; Per Lidén; Lars Lundberg

In C++, the memory allocator is often a bottleneck that severely limits performance and scalability on multiprocessor systems. The traditional solution is to optimize the C library memory allocation routines. An alternative is to attack the problem on the source code level, i.e. modify the application's source code. Such an approach makes it possible to achieve more efficient and customized memory management, but implementing and maintaining such source code optimizations by hand is both laborious and costly. Applications developed using object-oriented techniques, such as frameworks and design patterns, tend to use a great deal of dynamic memory to offer dynamic features. These features are mainly used for maintainability reasons, and temporal locality often characterizes the run-time behavior of the dynamic memory operations. We have implemented a pre-processor based method, named Amplify, which in a completely automated procedure optimizes (object-oriented) C++ applications to exploit the temporal locality in their dynamic memory usage. Test results show that Amplify can obtain significant speed-ups for synthetic applications and that it was useful for a commercial product.
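
Amplify itself is a pre-processor and is not reproduced here, but the kind of transformation it automates can be sketched by hand. The following hedged C++ example (all names such as Arena and Message are illustrative, not taken from Amplify) serves short-lived, per-request objects from a reusable arena instead of the global allocator, exploiting exactly the temporal locality the abstract describes.

```cpp
// Hedged sketch: Amplify's actual transformation is not reproduced here.  This
// only illustrates the kind of rewrite it targets: objects with strong temporal
// locality (allocated and released together per request) are carved out of a
// reusable arena instead of going through the global allocator each time.
#include <cstddef>
#include <new>
#include <vector>

class Arena {
public:
    explicit Arena(std::size_t bytes) : buffer_(bytes), offset_(0) {}

    // Bump-pointer allocation; align must be a power of two.
    void* allocate(std::size_t n, std::size_t align) {
        std::size_t p = (offset_ + align - 1) & ~(align - 1);
        if (p + n > buffer_.size()) throw std::bad_alloc();  // sketch: no fallback
        offset_ = p + n;
        return buffer_.data() + p;
    }

    // Per-request objects die together, so one reset replaces many frees.
    void reset() { offset_ = 0; }

private:
    std::vector<std::byte> buffer_;
    std::size_t offset_;
};

struct Message {   // hypothetical short-lived, trivially destructible object
    int id;
    double value;
};

int main() {
    Arena arena(1 << 16);
    for (int request = 0; request < 1000; ++request) {
        void* raw = arena.allocate(sizeof(Message), alignof(Message));
        Message* m = new (raw) Message{request, 0.0};
        (void)m;           // ... process the request ...
        arena.reset();     // reclaim everything allocated for this request at once
    }
}
```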


Asia-Pacific Software Engineering Conference | 1999

Maintainability myth causes performance problems in SMP application

Daniel Häggander; PerOlof Bengtsson; Jan Bosch; Lars Lundberg

A challenge in software design is to find solutions that balance and optimize the quality attributes of the application. We present a case study of an application and the results of a design decision made on weak assumptions. The application has been assessed with respect to performance and maintainability. We present and evaluate an alternative design of a critical system component. Based on interviews with the involved designers we establish the design rationale. By analyzing the evaluation data of the two alternatives and the design rationale, we conclude that the design decision was based on a general assumption that an adaptable component design should increase the maintainability of the application. This case study is clearly a counter example to that assumption, and we therefore reject it as a myth. This study shows, however, that the myth is indeed responsible for the major performance problem in the application.


Journal of Systems and Software | 2001

A simple process for migrating server applications to SMPs

Daniel Häggander; Lars Lundberg

A strong focus on quality attributes such as maintainability and flexibility has resulted in a number of new methodologies, e.g., object-oriented and component-based design, which can significantly limit application performance. A major challenge is to find solutions that balance and optimize the quality attributes, e.g., symmetric multiprocessor (SMP) performance versus maintainability and flexibility. We have studied three large real-time telecommunication server applications developed by Ericsson. In all these applications maintainability is strongly prioritized. The applications are also very demanding with respect to performance due to real-time requirements on throughput and response time. SMPs and multithreading are used to give these applications high and scalable performance. Our main finding is that dynamic memory management is a major bottleneck in these types of applications. The bottleneck can, however, be removed by using memory allocators optimized for SMPs or by reducing the number of allocations. We found that the number of allocations can be significantly reduced by using alternative design strategies for maintainability and flexibility. Based on our experiences we have defined a simple guideline-based process that helps designers of server applications establish a balance between SMP performance, maintainability and flexibility.
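
As one concrete illustration of the "reduce the number of allocations" remedy, here is a hedged C++ sketch; it is not Ericsson's code and not the paper's guideline process, and all names are illustrative. A thread-local object pool lets each thread reuse its own records, so the worker threads stop contending for the shared allocator.

```cpp
// Hedged sketch of one remedy the abstract names: cut down heap allocations so
// the shared allocator is no longer the SMP bottleneck.  Each worker thread keeps
// a thread-local free list of Record objects and reuses them instead of calling
// new/delete once per processed item.  Names and sizes are illustrative.
#include <thread>
#include <vector>

struct Record {          // hypothetical per-item work record
    long key = 0;
    double payload[8] = {};
};

// No locking needed: every thread owns its own free list.
thread_local std::vector<Record*> free_list;

Record* acquire() {
    if (!free_list.empty()) {
        Record* r = free_list.back();
        free_list.pop_back();
        return r;
    }
    return new Record();  // only the cold path touches the global allocator
}

void release(Record* r) { free_list.push_back(r); }

void worker(int items) {
    for (int i = 0; i < items; ++i) {
        Record* r = acquire();
        r->key = i;       // ... real per-item work would go here ...
        release(r);
    }
    for (Record* r : free_list) delete r;  // clean up before the thread exits
    free_list.clear();
}

int main() {
    std::vector<std::thread> threads;
    for (int t = 0; t < 4; ++t) threads.emplace_back(worker, 100000);
    for (auto& th : threads) th.join();
}
```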


International Conference on Supercomputing | 1998

Bounding the gain of optimizing data layout in vector processors

Lars Lundberg; Daniel Häggander

In vector processors, the number of memory banks (m) is generally larger than or equal to the memory access time divided by the processor cycle time. This ratio is denoted t, i.e. m ≥ t. Data is moved between the vector registers and the memory using long sequences of memory accesses whose addresses are separated by a fixed distance called the stride. For some strides, performance is seriously degraded due to memory bank conflicts. Many scientific applications are based on large matrices, and for such programs it is well known that the most unfavorable strides can be avoided by adding a number of dummy columns or by using hardware skewing. We present an optimal upper bound on the number of access conflicts when optimizing the data layout in this way. Programs are categorized according to their strides, and the worst-case behavior for each such category is given in a theorem. The result shows that for worst-case scenarios the number of conflicts increases rapidly when t grows, e.g. if we want to keep the worst-case behavior relatively constant when t grows from 6 to 10, we need to at least double the number of memory banks. The result is valid for skewed as well as for non-skewed memory systems.
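
The dummy-column trick the abstract mentions is easy to illustrate. The sketch below is a toy model, not taken from the paper: it only counts how many of m banks a given stride can reach, and shows how one dummy column changes that count. The concrete values of m and the matrix width are assumptions.

```cpp
// Hedged toy model, not from the paper: with m memory banks, element i of a
// strided access stream falls in bank (i * stride) % m, so only m / gcd(stride, m)
// distinct banks are used.  Padding a matrix with a dummy column changes the
// effective stride and can restore full bank coverage.  The numbers are examples.
#include <iostream>
#include <numeric>

long banks_used(long stride, long m) { return m / std::gcd(stride, m); }

int main() {
    const long m = 16;          // number of memory banks (example value)
    const long columns = 512;   // row-major matrix: walking down a column has stride = columns
    std::cout << "stride " << columns << ": uses "
              << banks_used(columns, m) << " of " << m << " banks\n";
    // One dummy column per row makes the stride 513, which is coprime with 16.
    std::cout << "stride " << (columns + 1) << ": uses "
              << banks_used(columns + 1, m) << " of " << m << " banks\n";
}
```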


Archive | 1999

Quality Attributes in Software Architecture Design

Lars Lundberg; Jan Bosch; Daniel Häggander; PerOlof Bengtsson


IASTED International Conference on Parallel and Distributed Computing and Systems | 2000

Attacking the dynamic memory problem for SMPs

Daniel Häggander; Lars Lundberg


IADIS International Conference on Applied Computing | 2005

Evaluating Real-Time Credit-Control Server Architectures Implemented on a Standard Platform

Piotr Tomaszewski; Lars Lundberg; Jim Håkansson; Daniel Häggander

Collaboration


Dive into Daniel Häggander's collaborations.

Top Co-Authors

Lars Lundberg, Blekinge Institute of Technology
Jan Bosch, Chalmers University of Technology
Jim Håkansson, Blekinge Institute of Technology
PerOlof Bengtsson, Blekinge Institute of Technology
Piotr Tomaszewski, Blekinge Institute of Technology
Charlie Svahnberg, Blekinge Institute of Technology
Jonas Matton, Blekinge Institute of Technology
Kamilla Klonowska, Blekinge Institute of Technology
Per Lidén, Blekinge Institute of Technology
Wolfgang Diestelkamp, Blekinge Institute of Technology