
Publication


Featured research published by Svein-Olaf Hvasshovd.


Extending Database Technology | 2006

Online, non-blocking relational schema changes

Jørgen Løland; Svein-Olaf Hvasshovd

A database schema should be able to evolve to reflect changes to the universe it represents. In existing systems, user transactions get blocked during complex schema transformations. Blocking user transactions is not an option in systems with very high availability requirements, like operational telecom databases. A non-blocking transformation framework is therefore needed. A method for performing non-blocking full outer join and split transformations, suitable for highly available databases, is presented in this paper. Only the log is used for change propagation, and this makes the method easy to integrate into existing DBMSs. Because the involved tables are not locked, the transformation may run as a low priority background process. As a result, the transformation has little impact on concurrent user transactions.
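The log-based change propagation described above can be illustrated with a toy sketch (the function names, log format, and two-phase structure here are illustrative assumptions, not the paper's actual design): an initial fuzzy copy of the table is transformed in the background while users keep writing, and the changes they made meanwhile are then replayed from the log.

```python
def nonblocking_transform(source_rows, change_log, transform):
    """Toy sketch of log-based change propagation.

    source_rows: {key: row} snapshot taken without locking the table
    change_log:  [(op, key, row)] operations captured during the copy
    transform:   the schema transformation applied to each row
    """
    # Phase 1: fuzzy copy -- transform the snapshot in the background.
    target = {key: transform(row) for key, row in source_rows.items()}
    # Phase 2: replay changes made by concurrent user transactions.
    for op, key, row in change_log:
        if op == "insert":
            target[key] = transform(row)
        elif op == "delete":
            target.pop(key, None)
    return target
```

For example, with `transform` as a column projection (simulating a split), rows inserted or deleted during the copy still end up correctly reflected in the target table.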


IEEE International Symposium on Fault Tolerant Computing | 1999

Evaluating the effectiveness of fault tolerance in replicated database management systems

Maitrayi Sabaratnam; Øystein Torbjørnsen; Svein-Olaf Hvasshovd

Database management systems (DBMS) usually achieve high availability and fault tolerance through replication. However, fault tolerance does not come for free, so DBMSs serving critical applications with real-time requirements must find a trade-off between fault tolerance cost and performance. The purpose of this study is two-fold. It evaluates the effectiveness of DBMS fault tolerance in the presence of corruption in the database buffer cache, which poses a serious threat to a DBMS's integrity requirements. The first experiment evaluates the effectiveness of fault tolerance, and the impact of faults on database integrity, performance, and availability, on a replicated DBMS, ClustRa, in the presence of software faults that corrupt the volatile data buffer cache. The second experiment identifies the weak data structure components in the data buffer cache whose corruption has fatal consequences, and suggests the need to guard them individually or collectively.


Database Systems for Advanced Applications | 2007

The circular two-phase commit protocol

Heine Kolltveit; Svein-Olaf Hvasshovd

Distributed transactional systems require an atomic commitment protocol to preserve the atomicity of the ACID properties. However, the industry-leading standard, 2PC, is slow and adds significant overhead to transaction processing. In this paper, a new atomic commitment protocol for main-memory primary-backup systems, C2PC, is proposed. It exploits replication to avoid disk logging and performs commit processing in a circular fashion. The analysis shows that C2PC has the same delay as 1PC and reduces the total overhead compared to 2PC.
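For reference, the baseline the paper improves on is textbook two-phase commit; a minimal coordinator sketch (this is standard 2PC, not the proposed C2PC, and the class and method names are illustrative) shows the two rounds that make 2PC costly:

```python
class Participant:
    """Minimal participant for illustrating textbook 2PC."""
    def __init__(self, vote_yes=True):
        self.vote_yes = vote_yes
        self.outcome = None

    def prepare(self):
        return self.vote_yes          # phase 1: vote yes/no

    def finish(self, decision):
        self.outcome = decision       # phase 2: apply the decision


def two_phase_commit(participants):
    # Commit only if every participant voted yes in phase 1.
    decision = "commit" if all(p.prepare() for p in participants) else "abort"
    for p in participants:           # phase 2: broadcast the decision
        p.finish(decision)
    return decision
```

Each transaction pays for a full vote round plus a decision round (and, in a real system, synchronous disk logging at each site), which is the overhead C2PC targets by exploiting replication instead.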


Advances in Databases and Information Systems | 2007

Preventing orphan requests by integrating replication and transactions

Heine Kolltveit; Svein-Olaf Hvasshovd

Replication is crucial for achieving highly available distributed systems. However, non-determinism introduces consistency problems between replicas. Transactions are well suited to maintaining consistency, and by integrating them with replication, support for non-deterministic execution in replicated environments can be achieved. This paper presents an approach where a passively replicated transaction manager is allowed to break replication transparency to abort orphan requests, thus handling non-determinism. A prototype implemented using existing open-source software, Jgroup/ARM and Jini, has been developed, and performance and failover tests have been executed. The results show that while this approach is feasible, components specifically tuned for performance must be used to meet real-time requirements.


International Parallel and Distributed Processing Symposium | 2005

Supporting load balancing and efficient reorganization during system scaling

Feng Zhu; Xiaowei Sun; Betty Salzberg; Svein-Olaf Hvasshovd

Reorganization is constantly necessary to maintain load balancing as distributed storage systems scale up and down. To support load balancing and efficient reorganization during system scaling, we propose a new hashing method called prime-based hashing (PBH) that can be used for data allocation in large distributed systems. PBH distributes objects among storage units based on residues (congruences) of hash-transformed key values modulo prime numbers. PBH provides nearly perfect load balancing: it distributes objects evenly and rebalances to preserve the even distribution as the system scales. At the same time, it facilitates cost-effective reorganization by minimizing data migration during system scaling. Locating an object in PBH is fast, requiring only low-complexity computations and knowledge of the total number of storage units. We also propose a local data clustering method to couple with PBH to make reorganization more efficient: objects are clustered according to the order of migration so that only the part of the data that needs to be migrated is scanned. In addition, we show that by storing a small amount of pre-computed information, ordering objects for clustering can be done very efficiently. We demonstrate the effectiveness of our algorithms through analysis and experiments.


Database Systems for Advanced Applications | 2008

Main memory commit processing: the impact of priorities

Heine Kolltveit; Svein-Olaf Hvasshovd

Distributed transaction systems require an atomic commitment protocol to preserve the ACID properties. The overhead of commit processing is a significant part of the load on a distributed database. Here, we propose approaches that reduce this overhead by prioritizing urgent messages and operations. This is done in the context of main-memory primary-backup systems, and the proposed approaches are found to significantly reduce the response time as seen by the client. In addition, by piggybacking messages on each other over the network, throughput is increased. Simulation results show that performance can be significantly improved using this approach, especially at utilizations above 50%.
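Prioritizing urgent commit traffic over normal operations can be sketched with a simple priority queue (the class name and the two-level URGENT/NORMAL scheme are illustrative assumptions, not the paper's actual message classes):

```python
import heapq

URGENT, NORMAL = 0, 1  # lower value = served first


class PriorityMessageQueue:
    """Toy sketch: urgent commit messages overtake queued normal work."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps FIFO order within a priority

    def put(self, priority, msg):
        heapq.heappush(self._heap, (priority, self._seq, msg))
        self._seq += 1

    def get(self):
        return heapq.heappop(self._heap)[2]
```

An urgent commit acknowledgement enqueued behind earlier normal requests is still dequeued first, which is the mechanism by which client-visible commit latency drops.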


Availability, Reliability and Security | 2008

Efficient High Availability Commit Processing

Heine Kolltveit; Svein-Olaf Hvasshovd

Distributed transaction systems require an atomic commitment protocol to preserve ACID properties. A commit protocol should add as little overhead as possible to avoid hampering performance. In this paper, dynamic coordinators are introduced. In main memory primary-backup systems, the approach significantly reduces the time spent during commit processing. The performance of such protocols must be properly evaluated to give system developers the information needed to make an educated choice between them. Thus, simulation results, verified by statistical analysis, are presented. The simulation results show that the performance can be significantly boosted by using optimizations and protocols especially designed for high availability main memory systems.


Pacific Rim International Symposium on Dependable Computing | 1999

Cost of ensuring safety in distributed database management systems

Maitrayi Sabaratnam; Øystein Torbjørnsen; Svein-Olaf Hvasshovd

Generally, applications employing database management systems (DBMS) require that the integrity of the data stored in the database be preserved during normal operation as well as after crash recovery. Preserving database integrity and availability requires extra safety measures in the form of consistency checks. Increased safety measures adversely affect performance by reducing throughput and increasing response time. This may not be acceptable for some critical applications, so a trade-off is needed. This study evaluates the cost, in terms of performance loss, of extra consistency checks introduced in the data buffer cache to preserve database integrity. In addition, it evaluates, with the help of fault injection, the improvement in error coverage and fault tolerance, and the occurrence of double failures causing long unavailability. The evaluation is performed on a replicated DBMS, ClustRa. The results show that the checksum overhead in a DBMS subjected to a very high TPC-B-like workload caused a reduction in throughput of up to 5%, while error detection coverage improved from 62% to 92%. Fault injection experiments show that corruption of the database image went down from 13% to 0%. This indicates that applications that require high safety, but can afford up to 5% performance loss, can adopt checksum mechanisms.
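A per-page checksum of the kind evaluated above can be sketched in a few lines (this is a generic CRC32 consistency check in the spirit of the study; ClustRa's actual mechanism may differ):

```python
import zlib


def guard_page(page: bytes) -> bytes:
    """Append a CRC32 checksum so corruption is detectable on read."""
    return page + zlib.crc32(page).to_bytes(4, "big")


def check_page(stored: bytes) -> bytes:
    """Verify the checksum and return the payload, or raise on corruption."""
    page, crc = stored[:-4], stored[-4:]
    if zlib.crc32(page).to_bytes(4, "big") != crc:
        raise ValueError("buffer page corrupted")
    return page
```

The checksum is recomputed on every read of the guarded page; that recomputation is exactly the throughput overhead the study measures at up to 5%.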


Nordic Conference on Human-Computer Interaction | 2016

Where HCI meets ACI

Ilyena Hirskyj-Douglas; Janet C. Read; Oskar Juhlin; Heli Väätäjä; Patricia Pons; Svein-Olaf Hvasshovd

This one-day workshop examines the interactions and the space between HCI (Human-Computer Interaction) and ACI (Animal-Computer Interaction), focusing on the transferability of methods and ideas between the two fields. The workshop will begin with short presentations followed by plenary discussions. The aim is to strengthen connected thinking while highlighting the methods that ACI and HCI, and their subfields including Child-Computer Interaction (CCI) and Human-Robot Interaction (HRI), can exchange, discussing what these fields can learn from each other and mapping their similarities and differences. The output of the workshop will be an initial mapping of the interchange of methods and learning transferability between ACI and HCI, as well as an advanced understanding of how the two fields are useful to each other.


International Conference on Networking | 2008

Main Memory Commit Protocols for Multiple Backups

Heine Kolltveit; Svein-Olaf Hvasshovd

Atomic commitment protocols are needed to preserve the ACID properties of distributed transactional systems. The performance of such protocols is essential to avoid adding too much overhead to transaction processing. Moreover, many applications require levels of availability that cannot be met by using only one backup source (disk or process). Combining these two requirements, this paper presents protocols for use with main-memory primary-backup systems in which there are multiple backup processes for each primary. These protocols are evaluated by simulation and verified by statistical analysis. The results show that the best choice of protocol is not static, but varies with the transactional load of the system.

Collaboration


Dive into Svein-Olaf Hvasshovd's collaborations.

Top Co-Authors

Heine Kolltveit
Norwegian University of Science and Technology

Maitrayi Sabaratnam
Norwegian University of Science and Technology

Svein Erik Bratsberg
Norwegian University of Science and Technology

Jørgen Løland
Norwegian University of Science and Technology

Rune Humborstad
Norwegian University of Science and Technology

Øystein Grøvlen
Norwegian University of Science and Technology

Feng Zhu
Northeastern University