
Publication


Featured research published by Alan Fekete.


ACM Transactions on Database Systems | 2005

Making snapshot isolation serializable

Alan Fekete; Dimitrios Liarokapis; Elizabeth J. O'Neil; Patrick E. O'Neil; Dennis E. Shasha

Snapshot Isolation (SI) is a multiversion concurrency control algorithm, first described in Berenson et al. [1995]. SI is attractive because it provides an isolation level that avoids many of the common concurrency anomalies, and has been implemented by Oracle and Microsoft SQL Server (with certain minor variations). SI does not guarantee serializability in all cases, but the TPC-C benchmark application [TPC-C], for example, executes under SI without serialization anomalies. All major database system products are delivered with default nonserializable isolation levels, often ones that encounter serialization anomalies more commonly than SI, and we suspect that numerous isolation errors occur each day at many large sites because of this, leading to corrupt data sometimes noted in data warehouse applications. The classical justification for lower isolation levels is that applications can be run under such levels to improve efficiency when they can be shown not to result in serious errors, but little or no guidance has been offered to application programmers and DBAs by vendors as to how to avoid such errors. This article develops a theory that characterizes when nonserializable executions of applications can occur under SI. Near the end of the article, we apply this theory to demonstrate that the TPC-C benchmark application has no serialization anomalies under SI, and then discuss how this demonstration can be generalized to other applications. We also present a discussion on how to modify the program logic of applications that are nonserializable under SI so that serializability will be guaranteed.
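
The anomaly at the heart of this line of work is write skew: two transactions each read overlapping data, make a decision based on their private snapshots, and then write to disjoint items, so SI's first-committer-wins rule never fires. The Python sketch below illustrates this with the textbook on-call example (the scenario and the toy store are illustrative, not taken from the article):

    # Toy multiversion store: each transaction reads a private snapshot and,
    # at commit, is aborted only if its write set overlaps a concurrent
    # committed write set (first-committer-wins). Write skew slips through
    # because the two write sets are disjoint.

    db = {"alice_on_call": True, "bob_on_call": True}

    def begin(db):
        return dict(db)  # private snapshot of the committed state

    def commit(db, writes, concurrent_committed_writes):
        if concurrent_committed_writes & set(writes):
            raise RuntimeError("write-write conflict: abort")
        db.update(writes)

    s1, s2 = begin(db), begin(db)  # T1 and T2 run concurrently

    # T1: "someone is still on call, so Alice may leave"
    assert s1["alice_on_call"] or s1["bob_on_call"]
    w1 = {"alice_on_call": False}

    # T2 reasons the same way from its own snapshot
    assert s2["alice_on_call"] or s2["bob_on_call"]
    w2 = {"bob_on_call": False}

    commit(db, w1, concurrent_committed_writes=set())
    commit(db, w2, concurrent_committed_writes=set(w1))  # disjoint, so it commits

    print(db)  # both False: the "someone on call" constraint is violated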


European Conference on Computer Systems | 2013

MDCC: multi-data center consistency

Tim Kraska; Gene Pang; Michael J. Franklin; Samuel Madden; Alan Fekete

Replicating data across multiple data centers allows placing data closer to the client, reducing latency for applications, and increases availability in the event of a data center failure. MDCC (Multi-Data Center Consistency) is an optimistic commit protocol for geo-replicated transactions that does not require a master or static partitioning, and is strongly consistent at a cost similar to eventually consistent protocols. MDCC takes advantage of Generalized Paxos for transaction processing and exploits commutative updates with value constraints in a quorum-based system. Our experiments show that MDCC outperforms existing synchronous transactional replication protocols, such as Megastore, by requiring only a single message round trip in the normal operational case, independent of the master location, and by scaling linearly with the number of machines as long as transaction conflict rates permit.
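
As a rough intuition for the single round trip, the sketch below shows a bare majority-quorum commit in Python. It is a deliberate simplification, not the MDCC protocol itself, which builds on Generalized Paxos and additionally exploits commutative updates with value constraints; all names here are invented for illustration:

    # A commit "option" goes to every replica in one round trip, and the
    # transaction commits once a majority accepts; a replica rejects when
    # another in-flight transaction already holds an option on an item.

    class Replica:
        def __init__(self):
            self.pending = {}  # item -> txn_id currently holding an option on it

        def accept(self, txn_id, writes):
            if any(self.pending.get(item, txn_id) != txn_id for item in writes):
                return False   # conflicting pending option: reject
            for item in writes:
                self.pending[item] = txn_id
            return True

    def try_commit(replicas, txn_id, writes):
        votes = sum(r.accept(txn_id, writes) for r in replicas)  # parallel in practice
        return votes > len(replicas) // 2                        # classic majority quorum

    replicas = [Replica() for _ in range(5)]
    print(try_commit(replicas, "t1", {"x": 1}))  # True: one round trip suffices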


International Conference on Management of Data | 2008

Serializable isolation for snapshot databases

Michael J. Cahill; Uwe Röhm; Alan Fekete

Many popular database management systems offer snapshot isolation rather than full serializability. There are well-known anomalies permitted by snapshot isolation that can lead to violations of data consistency by interleaving transactions that individually maintain consistency. Until now, the only way to prevent these anomalies was to modify the applications by introducing artificial locking or update conflicts, following careful analysis of conflicts between all pairs of transactions. This paper describes a modification to the concurrency control algorithm of a database management system that automatically detects and prevents snapshot isolation anomalies at runtime for arbitrary applications, thus providing serializable isolation. The new algorithm preserves the properties that make snapshot isolation attractive, including that readers do not block writers and vice versa. An implementation and performance study of the algorithm are described, showing that the throughput approaches that of snapshot isolation in most cases.
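
The core runtime rule of the algorithm can be stated compactly: every SI anomaly involves a pivot transaction with both an incoming and an outgoing read-write antidependency, so a transaction is aborted as soon as it acquires both. The Python sketch below shows just that rule, leaving out the bookkeeping (SIREAD locks, tracking of committed transactions) that a real implementation needs:

    # Each transaction carries two flags; a read-write antidependency from
    # reader to writer sets reader.out_conflict and writer.in_conflict, and
    # any transaction holding both flags is aborted as a potential pivot.

    class Txn:
        def __init__(self, name):
            self.name = name
            self.in_conflict = False    # a concurrent txn has an rw edge into this one
            self.out_conflict = False   # this txn has an rw edge out to a concurrent txn

    def record_rw_antidependency(reader, writer):
        # reader read a version that writer concurrently overwrote
        reader.out_conflict = True
        writer.in_conflict = True
        for t in (reader, writer):
            if t.in_conflict and t.out_conflict:
                raise RuntimeError(f"abort {t.name}: pivot of a dangerous structure")

    t1, t2, t3 = Txn("T1"), Txn("T2"), Txn("T3")
    record_rw_antidependency(t1, t2)  # T1 -rw-> T2: harmless on its own
    record_rw_antidependency(t2, t3)  # T2 now has both flags: aborted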


International Conference on Performance Engineering | 2012

How a consumer can measure elasticity for cloud platforms

Sadeka Islam; Kevin Lee; Alan Fekete; Anna Liu

One major benefit claimed for cloud computing is elasticity: the cost to a consumer of computation can grow or shrink with the workload. This paper offers improved ways to quantify the elasticity concept, using data available to the consumer. We define a measure that reflects the financial penalty to a particular consumer, from under-provisioning (leading to unacceptable latency or unmet demand) or over-provisioning (paying more than necessary for the resources needed to support a workload). We have applied several workloads to a public cloud; from our experiments we extract insights into the characteristics of a platform that influence its elasticity. We explore the impact of the rules used to increase or decrease capacity.
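
A minimal version of such a measure can be written down directly: charge for every interval where demand exceeds allocated capacity and, more cheaply, for every interval of idle capacity. The Python sketch below is illustrative only; the paper defines its measure in financial terms to the consumer, and the weights here are assumptions:

    # Hypothetical penalty measure over a workload trace: pay under_cost per
    # unit of unmet demand and over_cost per unit of idle capacity, per
    # interval. A perfectly elastic platform would score near zero.

    def elasticity_penalty(demand, capacity, under_cost=1.0, over_cost=0.1):
        penalty = 0.0
        for d, c in zip(demand, capacity):
            if d > c:
                penalty += under_cost * (d - c)  # unmet demand / SLA violations
            else:
                penalty += over_cost * (c - d)   # paying for unused resources
        return penalty

    demand   = [10, 40, 80, 60, 20]  # requests/s in successive intervals
    capacity = [50, 50, 50, 50, 50]  # static provisioning for comparison
    print(elasticity_penalty(demand, capacity))  # lower means more elastic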


Very Large Data Bases | 2013

Highly available transactions: virtues and limitations

Peter Bailis; Aaron Davidson; Alan Fekete; Ali Ghodsi; Joseph M. Hellerstein; Ion Stoica

To minimize network latency and remain online during server failures and network partitions, many modern distributed data storage systems eschew transactional functionality, which provides strong semantic guarantees for groups of multiple operations over multiple data items. In this work, we consider the problem of providing Highly Available Transactions (HATs): transactional guarantees that do not suffer unavailability during system partitions or incur high network latency. We introduce a taxonomy of highly available systems and analyze existing ACID isolation and distributed data consistency guarantees to identify which can and cannot be achieved in HAT systems. This unifies the literature on weak transactional isolation, replica consistency, and highly available systems. We analytically and experimentally quantify the availability and performance benefits of HATs---often two to three orders of magnitude over wide-area networks---and discuss their necessary semantic compromises.
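
One concrete example of a HAT-achievable guarantee is the read-your-writes session property: it needs no cross-replica coordination, as the sketch below shows (an illustration, not code from the paper; the class names are invented):

    # Read-your-writes without coordination: the client session caches its
    # own writes, so reads never miss them even if the replica it talks to
    # is stale or partitioned from the rest of the system.

    class Replica:
        def __init__(self):
            self.store = {}

        def apply_async(self, key, value):
            self.store[key] = value  # stands in for background replication

        def get(self, key):
            return self.store.get(key)

    class Session:
        def __init__(self, replica):
            self.replica = replica   # any reachable replica will do
            self.own_writes = {}

        def write(self, key, value):
            self.own_writes[key] = value
            self.replica.apply_async(key, value)

        def read(self, key):
            # prefer the session's own writes over possibly stale replica state
            return self.own_writes.get(key, self.replica.get(key))

    s = Session(Replica())
    s.write("x", 1)
    print(s.read("x"))  # 1, regardless of how far replication has progressed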


ACM Transactions on Computer Systems | 2001

Specifying and using a partitionable group communication service

Alan Fekete; Nancy A. Lynch; Alexander A. Shvartsman

Group communication services are becoming accepted as effective building blocks for the construction of fault-tolerant distributed applications. Many specifications for group communication services have been proposed. However, there is still no agreement about what these specifications should say, especially in cases where the services are partitionable, i.e., where communication failures may lead to simultaneous creation of groups with disjoint memberships, such that each group is unaware of the existence of any other group. In this paper, we present a new, succinct specification for a view-oriented partitionable group communication service. The service associates each message with a particular view of the group membership. All send and receive events for a message occur within the associated view. The service provides a total order on the messages within each view, and each processor receives a prefix of this order. Our specification separates safety requirements from performance and fault-tolerance requirements. The safety requirements are expressed by an abstract, global state machine. To present the performance and fault-tolerance requirements, we include failure-status input actions in the specification; we then give properties saying that consensus on the view and timely message delivery are guaranteed in an execution provided that the execution stabilizes to a situation in which the failure status stops changing and corresponds to a consistently partitioned system. Because consensus is not required in every execution, the specification is not subject to the existing impossibility results for partitionable systems. Our specification has a simple implementation, based on the membership algorithm of Cristian and Schmuck. We show the utility of the specification by constructing an ordered-broadcast application, using an algorithm (based on algorithms of Amir, Dolev, Keidar, and others) that reconciles information derived from different instantiations of the group. The application manages the view-change activity to build a shared sequence of messages, i.e., the per-view total orders of the group service are combined to give a universal total order. We prove the correctness and analyze the performance and fault-tolerance of the resulting application.
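
The safety core of the specification is easy to state: each message belongs to exactly one view, messages within a view form a single total order, and every member observes a prefix of that order. The Python sketch below illustrates these properties (it is an illustration, not the paper's I/O-automaton specification):

    # Per-view total order with prefix delivery: all sends and receives for
    # a message happen in one view, the view holds a single total order, and
    # each member's delivered messages form a prefix of that order.

    class View:
        def __init__(self, view_id, members):
            self.view_id = view_id
            self.members = members
            self.order = []                            # the per-view total order
            self.delivered = {m: 0 for m in members}   # prefix length per member

        def send(self, sender, msg):
            assert sender in self.members
            self.order.append((sender, msg))

        def deliver_next(self, member):
            i = self.delivered[member]
            if i == len(self.order):
                return None                            # member has the full prefix
            self.delivered[member] = i + 1
            return self.order[i]

    v = View(1, {"p", "q"})
    v.send("p", "hello")
    v.send("q", "world")
    print(v.deliver_next("p"), v.deliver_next("q"))  # both deliver ('p', 'hello') first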


IEEE Transactions on Software Engineering | 2005

Design-level performance prediction of component-based applications

Yan Liu; Ian Gorton; Alan Fekete

Server-side component technologies such as Enterprise JavaBeans (EJBs), .NET, and CORBA are commonly used in enterprise applications that have requirements for high performance and scalability. When designing such applications, architects must select a suitable component technology platform and application architecture to provide the required performance. This is challenging, as no methods or tools exist to predict application performance without building a significant prototype version for subsequent benchmarking. In this paper, we present an approach to predict the performance of component-based server-side applications during the design phase of software development. The approach constructs a quantitative performance model for a proposed application. The model requires inputs from an application-independent performance profile of the underlying component technology platform, and a design description of the application. The results from the model allow the architect to make early decisions between alternative application architectures in terms of their performance and scalability. We demonstrate the method using an EJB application and validate predictions from the model by implementing two different application architectures and measuring their performance on two different implementations of the EJB platform.
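
In the spirit of this approach, one can combine a measured platform profile (per-operation service demands) with a design description (which operations a request issues) into a back-of-envelope prediction. The sketch below uses a rough M/M/1-style estimate; the paper's actual model is richer, and all numbers here are invented:

    # Invented platform profile (seconds per operation, measured once per
    # platform) and design description (operations issued per request by the
    # proposed architecture); the prediction is a crude M/M/1-style estimate.

    platform_profile = {"ejb_call": 0.002, "db_query": 0.008}
    design = {"ejb_call": 3, "db_query": 2}

    def predict_response_time(arrival_rate, servers):
        demand = sum(platform_profile[op] * n for op, n in design.items())
        utilization = arrival_rate * demand / servers
        if utilization >= 1.0:
            return None                      # saturated: this design cannot meet the load
        return demand / (1.0 - utilization)  # response time grows sharply near saturation

    print(predict_response_time(arrival_rate=100, servers=4))  # ~0.049 s per request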


Principles of Distributed Computing | 1996

Eventually-serializable data services

Alan Fekete; David Gupta; Victor Luchangco; Nancy A. Lynch; Alexander A. Shvartsman

We present a new specification for distributed data services that trade off immediate consistency guarantees for improved system availability and efficiency, while ensuring the long-term consistency of the data. An eventually-serializable data service maintains the operations requested in a partial order that gravitates over time towards a total order. It provides clear and unambiguous guarantees about the immediate and long-term behavior of the system. To demonstrate its utility, we present an algorithm, based on one of Ladin, Liskov, Shrira, and Ghemawat [12], that implements this specification. Our algorithm provides the interface of the abstract service, and generalizes their algorithm by allowing general operations and greater flexibility in specifying consistency requirements. We also describe how to use this specification as a building block for applications such as directory services.
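
The interface idea can be pictured as an operation log with a growing stable prefix: the suffix may still be reordered, but the prefix is final. The Python sketch below illustrates the specification's guarantees, not the replication algorithm from the paper:

    # Operations are appended to a tentative order whose stable prefix only
    # grows; the service may still reshuffle the unstable suffix, so reads
    # of the current value are provisional until the prefix catches up.

    class EventuallySerializableService:
        def __init__(self, initial):
            self.initial = initial
            self.ops = []      # tentative total order of requested operations
            self.stable = 0    # length of the prefix that has become final

        def request(self, op):
            self.ops.append(op)

        def reorder_suffix(self, new_suffix):
            # only the not-yet-stable suffix may be rearranged
            assert sorted(map(id, new_suffix)) == sorted(map(id, self.ops[self.stable:]))
            self.ops[self.stable:] = new_suffix

        def stabilize(self, k):
            self.stable = max(self.stable, k)  # the order gravitates toward total

        def value(self):
            state = self.initial
            for op in self.ops:
                state = op(state)
            return state

    svc = EventuallySerializableService(0)
    svc.request(lambda s: s + 1)
    svc.request(lambda s: s * 2)
    print(svc.value())  # 2 for this order; provisional until stabilized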


Very Large Data Bases | 2014

Coordination avoidance in database systems

Peter Bailis; Alan Fekete; Michael J. Franklin; Ali Ghodsi; Joseph M. Hellerstein; Ion Stoica

Minimizing coordination, or blocking communication between concurrently executing operations, is key to maximizing scalability, availability, and high performance in database systems. However, uninhibited coordination-free execution can compromise application correctness, or consistency. When is coordination necessary for correctness? The classic use of serializable transactions is sufficient to maintain correctness but is not necessary for all applications, sacrificing potential scalability. In this paper, we develop a formal framework, invariant confluence, that determines whether an application requires coordination for correct execution. By operating on application-level invariants over database states (e.g., integrity constraints), invariant confluence analysis provides a necessary and sufficient condition for safe, coordination-free execution. When programmers specify their application invariants, this analysis allows databases to coordinate only when anomalies that might violate invariants are possible. We analyze the invariant confluence of common invariants and operations from real-world database systems (i.e., integrity constraints) and applications and show that many are invariant confluent and therefore achievable without coordination. We apply these results to a proof-of-concept coordination-avoiding database prototype and demonstrate sizable performance gains compared to serializable execution, notably a 25-fold improvement over prior TPC-C New-Order performance on a 200-server cluster.
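
The question the framework asks can be phrased operationally: if two replicas diverge from a common state, each preserving the invariant locally, does merging them preserve it too? The sketch below illustrates this with a bank-balance invariant (the merge function and scenario are invented for illustration; the paper gives the formal definition):

    # Two replicas diverge from a common base, each locally preserving the
    # invariant; merging sums their deltas. Withdrawals break the invariant
    # on merge (not invariant confluent), so they need coordination, while
    # deposits can only increase the merged balance and never break it.

    def merge(base, a, b):
        return base + (a - base) + (b - base)  # combine both replicas' deltas

    def invariant(balance):
        return balance >= 0

    base = 100
    a = base - 70  # replica A withdraws 70; locally fine (30 >= 0)
    b = base - 60  # replica B withdraws 60; locally fine (40 >= 0)

    merged = merge(base, a, b)
    print(merged, invariant(merged))  # -30 False: coordination was needed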


Journal of Systems and Software | 2012

Making sense of business process descriptions: An experimental comparison of graphical and textual notations

Avner Ottensooser; Alan Fekete; Hajo A. Reijers; Jan Mendling; Con Menictas

How effective is a notation in conveying the writer's intent correctly? This paper identifies understandability of design notations as an important aspect which calls for an experimental comparison. We compare the success of university students in interpreting business process descriptions, for an established graphical notation (BPMN) and for an alternative textual notation (based on written use-cases). Because a design must be read by diverse communities, including technically trained professionals such as developers and business analysts, as well as end-users and stakeholders from a wider business setting, we used different types of participants in our experiment. Specifically, we included those who had formal training in process description, and others who had not. Our experiments showed significant increases by both groups in their understanding of the process from reading the textual model. This was not so for the graphical model, where only the trained readers showed significant increases. This finding points to the value of educating readers of graphical descriptions in that particular notation when they become exposed to such models in their daily work.

Collaboration


Dive into Alan Fekete's collaboration.

Top Co-Authors

Nancy A. Lynch (Massachusetts Institute of Technology)
Anna Liu (Commonwealth Scientific and Industrial Research Organisation)
Ali Ghodsi (University of California)
Dean Kuo (Commonwealth Scientific and Industrial Research Organisation)
Julian Jang (Commonwealth Scientific and Industrial Research Organisation)
Paul Greenfield (Commonwealth Scientific and Industrial Research Organisation)
Akon Dey (University of Sydney)
Ion Stoica (University of California)