
Publication


Featured research published by Alexander Wieder.


symposium on cloud computing | 2011

Incoop: MapReduce for incremental computations

Pramod Bhatotia; Alexander Wieder; Rodrigo Rodrigues; Umut A. Acar; Rafael Pasquin

Many online data sets evolve over time as new entries are slowly added and existing entries are deleted or modified. Taking advantage of this, systems for incremental bulk data processing, such as Google's Percolator, can achieve efficient updates. To achieve this efficiency, however, these systems lose compatibility with the simple programming models offered by non-incremental systems, e.g., MapReduce, and more importantly, require the programmer to implement application-specific dynamic algorithms, ultimately increasing algorithm and code complexity. In this paper, we describe the architecture, implementation, and evaluation of Incoop, a generic MapReduce framework for incremental computations. Incoop detects changes to the input and automatically updates the output by employing an efficient, fine-grained result reuse mechanism. To achieve efficiency without sacrificing transparency, we adopt recent advances in the area of programming languages to identify the shortcomings of task-level memoization approaches, and to address these shortcomings by using several novel techniques: a storage system, a contraction phase for Reduce tasks, and an affinity-based scheduling algorithm. We have implemented Incoop by extending the Hadoop framework, and evaluated it by considering several applications and case studies. Our results show significant performance improvements without changing a single line of application code.
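
The task-level memoization idea behind Incoop can be illustrated with a toy sketch (not Incoop's actual implementation): map outputs are cached under a hash of each input split, so a second run recomputes only the splits whose content changed. The word-count mapper and all names here are hypothetical.

```python
import hashlib
from collections import Counter

memo = {}          # cache: hash of input split -> memoized map output
recomputed = []    # tracks which splits were actually re-run

def map_task(split):
    # plain word-count mapper (illustrative stand-in for a Map task)
    return Counter(split.split())

def incremental_mapreduce(splits):
    """Re-run map tasks only for splits whose content hash is new."""
    partials = []
    for split in splits:
        key = hashlib.sha256(split.encode()).hexdigest()
        if key not in memo:
            memo[key] = map_task(split)     # cache miss: compute and memoize
            recomputed.append(split)
        partials.append(memo[key])          # cache hit: reuse prior result
    # reduce step: merge the partial counts
    result = Counter()
    for p in partials:
        result.update(p)
    return result

run1 = incremental_mapreduce(["a b a", "c d"])
run2 = incremental_mapreduce(["a b a", "c e"])  # only the second split changed
```

In the second run only the changed split `"c e"` is re-mapped; the output for `"a b a"` is reused from the cache.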


principles of distributed computing | 2010

Brief announcement: modelling MapReduce for optimal execution in the cloud

Alexander Wieder; Pramod Bhatotia; Ansley Post; Rodrigo Rodrigues

We describe a model for MapReduce computations that can be used to optimize the increasingly complex choice of resources that cloud customers purchase.


Proceedings of the 4th International Workshop on Large Scale Distributed Systems and Middleware | 2010

Conductor: orchestrating the clouds

Alexander Wieder; Pramod Bhatotia; Ansley Post; Rodrigo Rodrigues

Cloud computing enables customers to access virtually unlimited resources on demand and without any fixed upfront cost. However, the commoditization of computing resources imposes new challenges in how to manage them: customers of cloud services are no longer restricted to the resources they own, but instead choose from a variety of different services offered by different providers, and the impact of these choices on price and overall performance is not always clear. Furthermore, having to take into account new cloud products and services, the cost of recovering from faults, or price fluctuations due to spot markets makes the picture even more unclear. This position paper highlights a series of challenges that must be overcome in order to allow customers to better leverage cloud resources. We also make the case for a system called Conductor that automatically manages resources in cloud computing to meet user-specifiable optimization goals, such as minimizing monetary cost or completion time. Finally, we discuss some of the challenges we will face in building such a system.
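
A toy version of the kind of optimization Conductor targets (the instance options, prices, and the perfect-scaling runtime model are all made up for illustration): pick the resource plan that minimizes monetary cost subject to a completion-time deadline.

```python
def plan_cost(n_machines, hourly_price, total_work_hours):
    """Runtime and cost of a plan, assuming (unrealistically) perfect scaling."""
    runtime = total_work_hours / n_machines
    return runtime, n_machines * hourly_price * runtime

def cheapest_plan(options, total_work_hours, deadline):
    """options: list of (n_machines, hourly_price). Returns the cheapest
    (n_machines, runtime, cost) whose runtime meets the deadline, or None."""
    best = None
    for n, price in options:
        runtime, cost = plan_cost(n, price, total_work_hours)
        if runtime <= deadline and (best is None or cost < best[2]):
            best = (n, runtime, cost)
    return best

# hypothetical offerings: (machines, $/machine-hour); the last is a spot price
options = [(2, 1.0), (8, 1.0), (8, 0.4)]
best = cheapest_plan(options, total_work_hours=16, deadline=4)
```

The two-machine plan is excluded by the deadline, and among the remaining plans the spot-priced one wins on cost, which is the kind of trade-off the paper argues is hard for customers to navigate by hand.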


real-time systems symposium | 2013

On Spin Locks in AUTOSAR: Blocking Analysis of FIFO, Unordered, and Priority-Ordered Spin Locks

Alexander Wieder; Björn B. Brandenburg

Motivated by the widespread use of spin locks in embedded multiprocessor real-time systems, the worst-case blocking in spin locks is analyzed using mixed-integer linear programming. Four queue orders and two preemption models are studied: (i) FIFO-ordered spin locks, (ii) unordered spin locks, (iii) priority-ordered spin locks with unordered tie-breaking, and (iv) priority-ordered spin locks with FIFO-ordered tie-breaking, each analyzed assuming both preempt able and non-preempt able spinning. Of the eight lock types, seven have not been analyzed in prior work. Concerning the sole exception (non-preempt able FIFO spin locks), the new analysis is asymptotically less pessimistic and typically much more accurate since no critical section is accounted for more than once. The eight lock types are empirically compared in schedulability experiments. While the presented analysis is generic in nature and applicable to real-time systems in general, it is specifically motivated by the recent inclusion of spin locks into the AUTOSAR standard, and four concrete suggestions for an improved AUTOSAR spin lock API are derived from the results.
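
For intuition, the classic coarse bound for non-preemptable FIFO spin locks (the baseline that the paper's MILP-based analysis refines) can be sketched as follows: each lock request spins behind at most one earlier request per remote processor, so per-request blocking is bounded by the sum of the longest remote critical-section lengths. The critical-section lengths below are made up.

```python
def fifo_spin_bound(csl_by_cpu, my_cpu, requests):
    """Coarse per-task blocking bound for non-preemptable FIFO spin locks:
    each of the task's lock requests waits for at most one critical section
    per remote processor, so it is blocked for at most the sum of the
    longest critical-section lengths on all other processors.

    csl_by_cpu: {cpu: longest critical-section length for this lock}
    requests:   number of times the task acquires this lock
    """
    per_request = sum(L for cpu, L in csl_by_cpu.items() if cpu != my_cpu)
    return requests * per_request

# hypothetical 4-processor system; lengths in microseconds
csl = {0: 10, 1: 25, 2: 15, 3: 40}
bound = fifo_spin_bound(csl, my_cpu=0, requests=3)  # 3 * (25 + 15 + 40)
```

The bound counts the same remote critical section once per request; the paper's MILP analysis is tighter precisely because it never accounts for any critical section more than once across the whole analysis window.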


international symposium on industrial embedded systems | 2013

Efficient partitioning of sporadic real-time tasks with shared resources and spin locks

Alexander Wieder; Björn B. Brandenburg

Partitioned fixed-priority scheduling is widely used in embedded multiprocessor real-time systems due to its simplicity and low runtime overheads. However, it fundamentally requires a static mapping of tasks to processors to be determined. Optimal task set partitioning is known to be NP-hard, and the situation is further aggravated when limited resources (such as I/O ports, co-processors, buffers, etc.) must be shared among the tasks. Partitioning heuristics are much faster to compute, but may fail to find a valid mapping even if one exists. In practice, such inefficiencies can be addressed by over-provisioning processors (i.e., by using more and faster processors than strictly required), albeit at the expense of increased space, weight, and power (SWaP) requirements. This work makes two contributions towards the efficient mapping of real-time tasks that share resources protected by spin locks. First, an Integer Linear Programming (ILP) formulation of the problem is presented, which, while computationally expensive, is efficient in the sense that it will find a valid assignment if one exists, thereby minimizing processor requirements. This ILP formulation is the first optimal solution to the mapping problem in the presence of spin locks. Second, a new resource-aware partitioning heuristic is introduced, which, while not optimal, is efficient in the sense that it easily scales to large problem instances. Notably, the proposed heuristic is much simpler than prior approaches, parameter-free, and shown to perform well for a wide range of workloads.
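
A rough sketch of a utilization-based first-fit-decreasing partitioning heuristic with a simple resource-aware twist (preferring processors that already host tasks sharing a resource, to keep contending tasks local). This is an illustration in the spirit of the paper, not its actual parameter-free heuristic; task names, utilizations, and resource sets are hypothetical.

```python
def partition_first_fit(tasks, n_cpus):
    """First-fit-decreasing partitioning by utilization.
    tasks: list of (name, utilization, {resource names}).
    Returns {cpu: [task names]} or None if some task cannot be placed."""
    cpus = [{"util": 0.0, "tasks": [], "res": set()} for _ in range(n_cpus)]
    for name, util, res in sorted(tasks, key=lambda t: -t[1]):
        # candidate order: processors sharing a resource first, then the rest
        order = sorted(range(n_cpus),
                       key=lambda i: (not (cpus[i]["res"] & res), i))
        for i in order:
            if cpus[i]["util"] + util <= 1.0:   # simple utilization test
                cpus[i]["util"] += util
                cpus[i]["tasks"].append(name)
                cpus[i]["res"] |= res
                break
        else:
            return None                          # heuristic failed to place task
    return {i: c["tasks"] for i, c in enumerate(cpus)}

tasks = [("t1", 0.5, {"r1"}), ("t2", 0.6, set()),
         ("t3", 0.25, {"r1"}), ("t4", 0.25, set())]
mapping = partition_first_fit(tasks, n_cpus=2)
```

Here t3 is steered onto the processor already hosting t1 (both use r1), even though plain first fit would have placed it elsewhere; a real heuristic would of course use a proper schedulability test rather than a utilization cap.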


real-time systems symposium | 2015

Global Real-Time Semaphore Protocols: A Survey, Unified Analysis, and Comparison

Maolin Yang; Alexander Wieder; Björn B. Brandenburg

All major real-time suspension-based locking protocols (or semaphore protocols) for global fixed-priority scheduling are reviewed and a new, unified response-time analysis framework applicable to all protocols is proposed. The newly proposed analysis, based on linear programming, is shown to be clearly preferable to all prior conventional approaches. Based on the new analysis, all protocols are directly compared with each other in a large-scale schedulability study. Interestingly, the Priority Inheritance Protocol (PIP) and the Flexible Multiprocessor Locking Protocol (FMLP), which are the two oldest and simplest of the considered protocols, are found to perform best.
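
For intuition on one of the surveyed protocols, the core rule of the Priority Inheritance Protocol can be sketched as follows (a deliberate simplification; the paper's LP-based response-time analysis is far more involved). The task and lock names are hypothetical.

```python
def effective_priority(base_prio, held_locks, waiters):
    """Priority Inheritance Protocol rule: a lock holder executes at the
    maximum of its own base priority and the priorities of all tasks
    currently blocked on locks it holds (higher number = higher priority).

    held_locks: locks held by the task
    waiters:    {lock: [priorities of tasks blocked on that lock]}
    """
    inherited = [p for lock in held_locks for p in waiters.get(lock, [])]
    return max([base_prio] + inherited)

# A low-priority task (priority 1) holds L1 while tasks of priority 5 and 3
# block on it: the holder temporarily runs at priority 5, bounding the
# priority inversion.
prio = effective_priority(1, ["L1"], {"L1": [5, 3]})
```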


Proceedings of the 4th International Workshop on Large Scale Distributed Systems and Middleware | 2010

Reliable data-center scale computations

Pramod Bhatotia; Alexander Wieder; Rodrigo Rodrigues; Flavio Junqueira; Benjamin Reed

Neither of the two broad classes of fault models considered by traditional fault tolerance techniques --- crash and Byzantine faults --- suits the environment of systems that run in today's data centers. On the one hand, assuming Byzantine faults is considered overkill due to the assumption of worst-case adversarial behavior, and the use of other techniques to guard against malicious attacks. On the other hand, the crash fault model is insufficient since it does not capture non-crash faults that may result from a variety of unexpected conditions that are commonplace in this setting. In this paper, we present the case for a more practical approach to handling non-crash (but non-adversarial) faults in data-center scale computations. In this context, we discuss how this problem can be tackled for an important class of data-center scale systems: systems for large-scale processing of data, with a particular focus on the Pig programming framework. Such an approach not only covers a significant fraction of the processing jobs that run in today's data centers, but is potentially applicable to a broader class of applications.


real-time systems symposium | 2014

On the Complexity of Worst-Case Blocking Analysis of Nested Critical Sections

Alexander Wieder; Björn B. Brandenburg

Accurately bounding the worst-case blocking for finite job sets (a special case of the classic sporadic task model of recurrent real-time systems) using either nested FIFO- or priority-ordered locks on multiprocessors is NP-hard. These intractability results are obtained with reductions from the Multiple-Choice Matching problem. The reductions are quite general and do not depend on (1) whether the locks are spin- or suspension-based, (2) whether global or partitioned scheduling is used, or (3) which scheduling policy is employed (as long as it is work-conserving). Further, we show that, for a special case in which the blocking analysis problem is NP-hard for FIFO- and priority-ordered locks, the problem for unordered spin locks with nested critical sections can be answered in polynomial time by solving a reachability problem on a suitably constructed graph, although (or rather, because) unordered locks do not offer any acquisition-order guarantees. Finally, we identify several challenging open problems, pertaining both to circumventing the hardness results and to classifying the inherent difficulty of the problem more precisely.
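
The polynomial-time result reduces the unordered-lock analysis question to reachability on a suitably constructed graph; the construction itself is the paper's contribution, but the reachability step is ordinary breadth-first search, which runs in O(V + E). The graph below is a made-up stand-in, not the paper's actual construction.

```python
from collections import deque

def reachable(graph, src, dst):
    """Breadth-first reachability check in O(V + E).
    graph: {node: [neighbor nodes]} adjacency lists."""
    seen = {src}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical blocking graph: nodes model lock-acquisition states and an
# edge means one state can directly delay the other.
g = {"req_A": ["hold_A"], "hold_A": ["req_B"], "req_B": ["hold_B"],
     "hold_B": [], "req_C": ["hold_C"], "hold_C": []}
```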


real-time systems symposium | 2016

A Blocking Bound for Nested FIFO Spin Locks

Alessandro Biondi; Björn B. Brandenburg; Alexander Wieder

Bounding worst-case blocking delays due to lock contention is a fundamental problem in the analysis of multiprocessor real-time systems. However, virtually all fine-grained (i.e., non-asymptotic) analyses published to date make a simplifying (but impractical) assumption: critical sections must not be nested. This paper overcomes this fundamental limitation and presents the first fine-grained blocking bound for nested non-preemptive FIFO spin locks under partitioned fixed-priority scheduling. To this end, a new analysis method is introduced, based on a graph abstraction that reflects all possible resource conflicts and transitive delays.


Taylor and Francis | 2014

Incremental MapReduce Computations

Pramod Bhatotia; Alexander Wieder; Umut A. Acar; Rodrigo Rodrigues

Distributed processing of large data sets is an area that has received much attention from researchers and practitioners over the last few years. In this context, several proposals exist that leverage the observation that data sets evolve over time, and as such there is often a substantial overlap between the inputs to consecutive runs of a data processing job. This allows the programmers of these systems to devise an efficient logic to update the output upon an input change. However, most of these systems lack compatibility with existing models and require the programmer to implement an application-specific dynamic algorithm, which increases algorithm and code complexity. In this chapter, we describe our previous work on building a platform called Incoop, which allows for running MapReduce computations incrementally and transparently. Incoop detects changes between two files that are used as inputs to consecutive MapReduce jobs, and efficiently propagates those changes until the new output is produced. The design of Incoop is based on memoizing the results of previously run tasks, and reusing these results whenever possible. Doing this efficiently introduces several technical challenges that are overcome with novel concepts, such as a large-scale storage system that efficiently computes deltas between two inputs, a Contraction phase to break up the work of the Reduce phase, and an affinity-based scheduling algorithm. This chapter presents the motivation and design of Incoop, as well as a complete evaluation using several application benchmarks. Our results show significant performance improvements without changing a single line of application code.

Distributed processing of large data sets has become an important task in the life of various companies and organizations, for whom data analysis is an important vehicle to improve the way they operate. This area has attracted a lot of attention from both researchers and practitioners over the last few years, particularly after the introduction of the MapReduce paradigm for large-scale parallel data processing [19]. A usual characteristic of the data sets that are provided as inputs to large-scale data processing jobs is that they do not vary dramatically over time. Instead, the same job is often invoked consecutively with small changes in its input from one run to the next. For instance, researchers have reported that the ratio between old and new data when processing consecutive web crawls may range from 10 to 1000X [28]. Motivated by this observation, there have been several proposals for large-scale …
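
The delta computation mentioned in the abstract can be illustrated with a toy sketch: hash the chunks of each input and report which chunks of the new input are not present in the old one. (Incoop's storage layer uses content-based chunking so that insertions do not shift all later chunk boundaries; the fixed-size chunks here are a simplification to keep the illustration short.)

```python
import hashlib

def chunk_hashes(text, chunk_size=8):
    """Split text into fixed-size chunks and hash each one."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    return [hashlib.sha256(c.encode()).hexdigest() for c in chunks]

def delta(old_text, new_text):
    """Indices of chunks in new_text that do not appear in old_text.
    Only these chunks would need to be re-processed."""
    old = set(chunk_hashes(old_text))
    return [i for i, h in enumerate(chunk_hashes(new_text)) if h not in old]

# Only the middle 8-byte chunk differs between the two inputs.
changed = delta("aaaaaaaabbbbbbbbcccccccc", "aaaaaaaaXXXXXXXXcccccccc")
```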

Collaboration


Dive into Alexander Wieder's collaboration.

Top Co-Authors

Umut A. Acar

Carnegie Mellon University

Rafael Pasquin

Federal University of Uberlandia

Alessandro Biondi

Sant'Anna School of Advanced Studies
