Publications


Featured research published by Björn Andersson.


Computational Science and Engineering | 2013

Coordinated Bank and Cache Coloring for Temporal Protection of Memory Accesses

Noriaki Suzuki; Hyoseung Kim; Dionisio de Niz; Björn Andersson; Lutz Wrage; Mark H. Klein; Ragunathan Rajkumar

In commercial-off-the-shelf (COTS) multi-core systems, the execution times of tasks become hard to predict because of contention on shared resources in the memory hierarchy. In particular, a task running in one processor core can delay the execution of another task running in another processor core. This is due to the fact that tasks can access data in the same cache set shared among processor cores or in the same memory bank in the DRAM memory (or both). Such cache and bank interference effects have motivated the need to create isolation mechanisms for resources accessed by more than one task. One popular isolation mechanism is cache coloring that divides the cache into multiple partitions. With cache coloring, each task can be assigned exclusive cache partitions, thereby preventing cache interference from other tasks. Similarly, bank coloring allows assigning exclusive bank partitions to tasks. While cache coloring and some bank coloring mechanisms have been studied separately, interactions between the two schemes have not been studied. Specifically, while memory accesses to two different bank colors do not interfere with each other at the bank level, they may interact at the cache level. Similarly, two different cache colors avoid cache interference but may not prevent bank interference. Therefore it is necessary to coordinate cache and bank coloring approaches. In this paper, we present a coordinated cache and bank coloring scheme that is designed to prevent cache and bank interference simultaneously. We also developed color allocation algorithms for configuring a virtual memory system to support our scheme which has been implemented in the Linux kernel. In our experiments, we observed that the execution time can increase by 60% due to inter-task interference when we use only cache coloring. Our coordinated approach can reduce this figure down to 12% (an 80% reduction).
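A minimal sketch of the page-coloring idea behind this work, assuming hypothetical page-frame bit positions for the cache-set and DRAM-bank indices (the real positions depend on the platform): it shows how a page's cache color and bank color can be derived, and how an allocator could serve only pages matching both of a task's assigned colors. This illustrates the coordination idea only; it is not the authors' Linux kernel implementation.

```python
# Illustrative sketch (not the paper's implementation): deriving cache and bank
# colors from a physical page frame number and picking pages that satisfy both.
# The bit positions below are hypothetical; real values depend on the platform's
# cache geometry and DRAM address mapping.

CACHE_COLOR_BITS = (0, 1, 2)   # page-frame bits indexing cache-set groups (assumed)
BANK_COLOR_BITS  = (3, 4)      # page-frame bits selecting the DRAM bank (assumed)

def bits_value(pfn: int, bit_positions) -> int:
    """Extract the given page-frame-number bits and pack them into an integer."""
    return sum(((pfn >> b) & 1) << i for i, b in enumerate(bit_positions))

def cache_color(pfn: int) -> int:
    return bits_value(pfn, CACHE_COLOR_BITS)

def bank_color(pfn: int) -> int:
    return bits_value(pfn, BANK_COLOR_BITS)

def coordinated_alloc(free_pfns, want_cache_color, want_bank_color):
    """Return only the free pages whose cache AND bank colors match the task's
    assigned colors, so the task is isolated at both levels simultaneously."""
    return [pfn for pfn in free_pfns
            if cache_color(pfn) == want_cache_color
            and bank_color(pfn) == want_bank_color]

if __name__ == "__main__":
    free_pages = range(0, 256)
    pages = coordinated_alloc(free_pages, want_cache_color=2, want_bank_color=1)
    print(f"{len(pages)} pages satisfy cache color 2 and bank color 1")
```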


International Conference on Principles of Distributed Systems | 2012

Analyzing Global-EDF for Multiprocessor Scheduling of Parallel Tasks

Björn Andersson; Dionisio de Niz

Consider the problem of scheduling a set of constrained-deadline sporadic real-time tasks on a multiprocessor where (i) all processors are identical, (ii) each task is characterized by its execution requirement, its deadline and its minimum inter-arrival time, (iii) each task generates a (potentially infinite) sequence of jobs and (iv) the execution requirement of a job and its potential for parallel execution are described by one or more stages, each stage comprising one or more segments, such that all segments in a stage have the same execution requirement, segments in the same stage may execute in parallel, and a segment may start only after all segments of previous stages have finished. We present a schedulability test for such a system where tasks are scheduled with global-EDF. This schedulability test has a resource-augmentation bound of two, meaning that if it is possible for a task set to meet deadlines (not necessarily with global-EDF), then our schedulability test guarantees that all deadlines are met when tasks are scheduled with global-EDF, provided that the system analyzed with our schedulability test is given processors of twice the speed.
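The stage/segment task model in (iv) can be captured with a small data structure. The sketch below computes a parallel task's total work and critical-path length, two quantities that schedulability analyses of parallel tasks commonly build on; it is not the paper's global-EDF schedulability test.

```python
# Minimal sketch of the stage/segment parallel task model described above.
# It only computes total work and critical-path length; it is NOT the paper's
# global-EDF schedulability test, just quantities such analyses build on.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ParallelTask:
    deadline: float           # relative (constrained) deadline
    min_interarrival: float   # minimum inter-arrival time
    # Each stage is (number_of_segments, execution_time_per_segment);
    # segments in a stage may run in parallel, stages run in sequence.
    stages: List[Tuple[int, float]] = field(default_factory=list)

    def total_work(self) -> float:
        """Work if every segment ran alone (sum over all segments)."""
        return sum(n * c for n, c in self.stages)

    def critical_path(self) -> float:
        """Minimum completion time on unboundedly many processors:
        one segment per stage must still run sequentially."""
        return sum(c for _, c in self.stages)

task = ParallelTask(deadline=10.0, min_interarrival=12.0,
                    stages=[(1, 1.0), (4, 2.0), (1, 1.0)])
print(task.total_work(), task.critical_path())   # 10.0 4.0
```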


Euromicro Conference on Real-Time Systems | 2012

Non-preemptive Scheduling with History-Dependent Execution Time

Björn Andersson; Sagar Chaki; Dionisio de Niz; Brian Dougherty; Russell Kegley; Jules White

Consider non-preemptive fixed-priority scheduling of arbitrary-deadline sporadic tasks on a single processor assuming that the execution time of a job J depends on the actual schedule (sequence) of jobs executed before J. We present exact schedulability analysis for such a system.
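A hypothetical illustration of what a history-dependent execution-time model can look like: here a job's execution time depends only on which task ran immediately before it (for example because of cache pollution), and the length of a non-preemptive busy period is summed over a given job sequence. The table values are invented, and this is not the paper's exact schedulability analysis.

```python
# Illustrative model only (not the paper's analysis): execution time of a job
# depends on which task ran immediately before it, e.g., due to cache pollution.
# The table values are hypothetical.

# exec_time[(prev_task, task)] = execution time of `task` when `prev_task`
# completed just before it; None means "no predecessor".
exec_time = {
    (None, "A"): 2.0, ("A", "A"): 2.0, ("B", "A"): 3.0,
    (None, "B"): 1.0, ("A", "B"): 1.5, ("B", "B"): 1.0,
}

def schedule_length(sequence):
    """Total non-preemptive busy-period length of a given job sequence."""
    total, prev = 0.0, None
    for task in sequence:
        total += exec_time[(prev, task)]
        prev = task
    return total

print(schedule_length(["A", "B", "A"]))  # 2.0 + 1.5 + 3.0 = 6.5
```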


ACM Transactions on Embedded Computing Systems | 2014

Provably Good Task Assignment for Two-Type Heterogeneous Multiprocessors Using Cutting Planes

Björn Andersson; Gurulingesh Raravi

Consider scheduling of real-time tasks on a multiprocessor where migration is forbidden. Specifically, consider the problem of determining a task-to-processor assignment for a given collection of implicit-deadline sporadic tasks upon a multiprocessor platform in which there are two distinct types of processors. For this problem, we propose a new algorithm, LPC (task assignment based on solving a Linear Program with Cutting planes). The algorithm offers the following guarantee: for a given task set and platform, if a feasible task-to-processor assignment exists, then LPC also finds a feasible task-to-processor assignment, but on a platform in which each processor is 1.5 times faster and which has three additional processors. For systems with a large number of processors, LPC has a better approximation ratio than state-of-the-art algorithms. To the best of our knowledge, this is the first work that develops a provably good real-time task assignment algorithm using cutting planes.
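For intuition, the sketch below sets up a natural fractional (LP-relaxation) feasibility check for assigning implicit-deadline tasks to two processor types, using scipy's linprog and invented utilization values. It omits the cutting planes and rounding that LPC relies on, so it should be read as background on the kind of LP such approaches build on, not as the paper's algorithm.

```python
# Sketch of a natural LP relaxation for fractional task-to-type assignment on a
# two-type platform. This is NOT the LPC algorithm from the paper (no cutting
# planes, no rounding). Requires numpy and scipy; utilization values are invented.
import numpy as np
from scipy.optimize import linprog

u1 = np.array([0.6, 0.3, 0.8])   # C_i / T_i if task i runs on a type-1 processor
u2 = np.array([0.2, 0.7, 0.4])   # C_i / T_i if task i runs on a type-2 processor
m1, m2 = 1, 1                    # number of processors of each type

n = len(u1)
# Variables: x[i] = fraction of task i on type 1, y[i] = fraction on type 2,
# laid out as [x_0..x_{n-1}, y_0..y_{n-1}].
c = np.zeros(2 * n)                          # pure feasibility problem
A_eq = np.hstack([np.eye(n), np.eye(n)])     # x_i + y_i = 1 for every task
b_eq = np.ones(n)
A_ub = np.array([np.concatenate([u1, np.zeros(n)]),   # type-1 capacity
                 np.concatenate([np.zeros(n), u2])])  # type-2 capacity
b_ub = np.array([m1, m2])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print("fractionally feasible:", res.success)
```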


Proceedings of the 20th International Conference on Real-Time and Network Systems | 2012

Real-time scheduling with resource sharing on uniform multiprocessors

Gurulingesh Raravi; Vincent Nélis; Björn Andersson

Consider the problem of scheduling a set of implicit-deadline sporadic tasks to meet all deadlines on a uniform multiprocessor platform where each task may access at most one of ρ shared resources, and each job of that task may access it at most once. The resources have to be accessed in a mutually exclusive manner. We propose an algorithm, GIS-vpr, which offers the following guarantee: if a task set can be scheduled to meet all deadlines by an optimal task assignment scheme that allows a task to migrate only when it accesses or releases a resource, then our algorithm also meets all deadlines under the same restriction on task migration, if given processors 4 + 6ρ times as fast. The proposed algorithm, by design, limits the number of migrations per job to at most two. To the best of our knowledge, this is the first result for resource sharing on uniform multiprocessors with a proven performance guarantee.
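A worked instance of the stated speed bound, using only what the abstract says: the guarantee requires processors 4 + 6ρ times as fast, so the factor grows linearly in the number of shared resources ρ.

```python
# Worked example of the stated bound: with rho shared resources, the guarantee
# holds if processors are 4 + 6*rho times as fast as those used by the
# reference (optimal, restricted-migration) scheme.
def required_speedup(rho: int) -> int:
    return 4 + 6 * rho

for rho in (0, 1, 2, 4):
    print(rho, required_speedup(rho))   # 4, 10, 16, 28
```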


ACM SIGBED Review | 2016

Evaluating the average-case performance penalty of bandwidth-like interfaces

Björn Andersson

Many solutions for composability and compositionality rely on specifying the interface of a component using bandwidth. Some previous works specify period (P) and budget (Q) as the interface of a component: Q/P is a bandwidth (the share of a processor that this component may request), and P specifies the time granularity at which this processing capacity is allocated. Other works add a deadline parameter, which can provide tighter bounds on how this processing capacity is distributed. Yet other works use the parameters α and Δ, where α is the bandwidth and Δ specifies how smoothly this bandwidth is distributed. It is known [4] that such bandwidth-like interfaces carry a cost: there are tasksets that could be guaranteed schedulable if tasks were scheduled directly on the processor, but that cannot be guaranteed schedulable with a bandwidth-like interface. It is also known that this penalty can be infinite: for any k, the use of a bandwidth-like interface may require a processor that is k times faster. This raises the question: what is the average-case performance penalty of bandwidth-like interfaces? This paper addresses that question. We answer it by randomly generating tasksets and, for each taskset, computing a lower bound on how much faster a processor needs to be when a bandwidth-like scheme is used. We do not consider any specific bandwidth-like scheme; instead, we derive an expression that states a lower bound on how much faster a processor needs to be for any bandwidth-like scheme. For the distributions considered in this paper, we find that (i) the experimental results depend on the experimental setup, (ii) this lower bound on the penalty was never larger than 4.0, (iii) for one experimental setup, it exceeded 2.4 for every taskset, (iv) the histogram of this penalty appears to be unimodal, and (v) for implicit-deadline sporadic tasks, this lower bound on the penalty was exactly 1.
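A sketch of the shape of such an experiment, under assumptions not taken from the paper: UUniFast-style utilization generation is assumed, and the lower-bound expression (which is the paper's contribution) is not reproduced, so a clearly labeled stand-in value is used in its place.

```python
# Sketch of the experimental shape only: generate random constrained-deadline
# tasksets and apply a per-taskset value in place of the paper's lower bound.
# UUniFast-style utilization generation is assumed; `speedup_lower_bound` is a
# hypothetical stand-in, NOT the expression derived in the paper.
import random

def uunifast(n: int, total_util: float):
    """Generate n task utilizations summing to total_util (UUniFast)."""
    utils, remaining = [], total_util
    for i in range(1, n):
        next_remaining = remaining * random.random() ** (1.0 / (n - i))
        utils.append(remaining - next_remaining)
        remaining = next_remaining
    utils.append(remaining)
    return utils

def make_taskset(n=5, total_util=0.7):
    tasks = []
    for u in uunifast(n, total_util):
        period = random.uniform(10, 1000)
        deadline = random.uniform(u * period, period)   # constrained deadline
        tasks.append({"C": u * period, "D": deadline, "T": period})
    return tasks

def speedup_lower_bound(taskset) -> float:
    """Stand-in for the paper's lower-bound expression (not reproduced here);
    returns the taskset's maximum density as an arbitrary illustrative value."""
    return max(t["C"] / min(t["D"], t["T"]) for t in taskset)

values = [speedup_lower_bound(make_taskset()) for _ in range(1000)]
print(min(values), max(values))
```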


AIAA Infotech @ Aerospace | 2015

COTS Multicore Processors in Avionics Systems: Challenges and Solutions

Dionisio de Niz; Björn Andersson; Lutz Wrage

In this paper we discuss the challenges that the shared memory hierarchy of multicore processors introduces to the predictability of real-time systems. In particular, the shared memory induces unpredictable time delays that can make real-time tasks miss their deadlines. We then present the techniques we developed to create cache and memory-bank partitions using page coloring, which minimize memory delays and make them predictable. To allocate these partitions we present a coordinated private-partition allocation algorithm. Given that the number of partitions that can be created on COTS hardware may be small, we also present a timing analysis for shared bank partitions. These solutions are finally evaluated on a small avionics model problem using a ball-following control algorithm. This problem allows us to verify the effectiveness of our partitioning mechanisms, which are able to guarantee the timing of the control algorithm in cases where it cannot be preserved without them.


ACM Transactions on Embedded Computing Systems | 2018

Schedulability Analysis of Tasks with Corunner-Dependent Execution Times

Björn Andersson; Hyoseung Kim; Dionisio de Niz; Mark H. Klein; Ragunathan Rajkumar; John P. Lehoczky

Consider fixed-priority preemptive partitioned scheduling of constrained-deadline sporadic tasks on a multiprocessor. A task generates a sequence of jobs and each job has a deadline that must be met. Assume tasks have corunner-dependent execution times; i.e., the execution time of a job J depends on the set of jobs that happen to execute (on other processors) at instants when J executes. We present a model that describes corunner-dependent execution times. For this model, we show that exact schedulability testing is co-NP-hard in the strong sense. Facing this complexity, we present a sufficient schedulability test, which has pseudo-polynomial time complexity if the number of processors is fixed. We ran experiments with synthetic software benchmarks on a quad-core Intel multicore processor with the Linux/RK operating system and found that, for each task, its maximum measured response time was bounded by the upper bound computed by our theory.
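One way to represent corunner-dependent execution times, with invented numbers: a table indexed by the set of corunning tasks, from which a pessimistic upper bound over all possible corunner sets can be taken. This is only a representation sketch, not the paper's sufficient schedulability test.

```python
# Illustrative representation of corunner-dependent execution times (values are
# hypothetical; this is not the paper's schedulability test). The execution time
# of a job of a task depends on the set of tasks corunning on other cores.
from itertools import chain, combinations

# wcet[task][frozenset_of_corunners] = execution time under that corunning set
wcet = {
    "tau1": {frozenset(): 2.0, frozenset({"tau2"}): 2.5,
             frozenset({"tau3"}): 3.0, frozenset({"tau2", "tau3"}): 3.6},
}

def powerset(iterable):
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def worst_case_over_corunners(task, possible_corunners):
    """A simple (pessimistic) upper bound: the maximum execution time over all
    subsets of tasks that could possibly corun with `task`."""
    return max(wcet[task][frozenset(s)] for s in powerset(possible_corunners))

print(worst_case_over_corunners("tau1", {"tau2", "tau3"}))   # 3.6
```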


Runtime Verification | 2017

Combining Symbolic Runtime Enforcers for Cyber-Physical Systems

Björn Andersson; Sagar Chaki; Dionisio de Niz

The problem of composing multiple, possibly conflicting, runtime enforcers for a cyber-physical system (CPS) is considered. A formal definition of utility-agnostic and utility-maximizing CPS enforcers is presented, followed by an algorithm to combine multiple enforcers and resolve their conflicts based on a design-time prioritization. To implement this combination efficiently, enforcers are encoded symbolically as SMT formulas, and the combination is reduced to a set of SMT satisfiability and optimization operations. Further performance gains are achieved by using the SMT solvers incrementally. The approach is validated via experiments in an indoor area with Parrot minidrones. The incremental enforcer combination achieves an order-of-magnitude speedup with no deadline misses.
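A hedged sketch of the combination mechanism using the z3 SMT solver's Python API (assumed to be installed): enforcers are encoded as constraints over an actuation variable, asserted in priority order with incremental push/pop, and a lower-priority enforcer that conflicts with the already-accepted ones is dropped. The encoding and the drone-speed example are illustrative only, not the paper's enforcer definitions.

```python
# Hedged sketch (assumes the z3-solver Python package): runtime enforcers over a
# drone's commanded vertical speed `vz`, combined by design-time priority.
from z3 import Real, And, Solver, sat

vz = Real("vz")                               # actuation variable to be enforced
safety_enforcer  = And(vz >= -1.0, vz <= 1.0) # highest priority: speed limits
mission_enforcer = vz >= 0.5                  # next priority: keep climbing
land_enforcer    = vz <= -0.2                 # lowest priority: descend (conflicts)

def combine(prioritized_enforcers):
    """Keep enforcers in priority order, dropping any that conflicts with the
    already-accepted (higher-priority) ones; uses incremental push/pop."""
    s, accepted = Solver(), []
    for enf in prioritized_enforcers:
        s.push()
        s.add(enf)
        if s.check() == sat:
            accepted.append(enf)   # consistent: keep it asserted
        else:
            s.pop()                # conflict: discard the lower-priority enforcer
    return accepted, s

accepted, solver = combine([safety_enforcer, mission_enforcer, land_enforcer])
# land_enforcer conflicts with mission_enforcer, so only the first two survive.
if solver.check() == sat:
    print("enforced vz =", solver.model()[vz])
```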


Design, Automation, and Test in Europe | 2017

Mixed-criticality processing pipelines

Dionisio de Niz; Björn Andersson; Hyoseung Kim; Mark H. Klein; Linh Thi Xuan Phan; Raj Rajkumar

While a number of schemes exist for mixed-criticality scheduling in a single-processor setting, no solution covers the industry need for end-to-end scheduling across multiple processors in a pipeline. In this paper, we present an end-to-end zero-slack rate-monotonic (ZSRM) scheme for real-time pipelines, called the ZSRM pipeline scheduler, that addresses this need. Under ZSRM, each task is associated with a parameter called the zero-slack instant; whenever a higher-criticality job has not finished by its zero-slack instant relative to its arrival time, all jobs of lower criticality are suspended to meet the deadline of the higher-criticality job. We develop a new schedulability test and an algorithm for computing the zero-slack instants of tasks scheduled across a pipeline.
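A small sketch of the zero-slack runtime rule as described above, assuming the zero-slack instants have already been computed offline: when a job reaches its zero-slack instant unfinished, all lower-criticality jobs are suspended. It does not include the paper's schedulability test or the pipeline-wide computation.

```python
# Sketch of the zero-slack runtime rule described above (not the schedulability
# test or the pipeline-wide algorithm): at a job's zero-slack instant, if it has
# not finished, every lower-criticality job is suspended until it completes.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    criticality: int       # higher number = more critical
    arrival: float
    zero_slack: float      # offset from arrival (assumed computed offline)
    finished: bool = False
    suspended: bool = False

def at_time(now: float, jobs):
    """Apply the zero-slack rule at time `now`."""
    for j in jobs:
        if (not j.finished) and now >= j.arrival + j.zero_slack:
            # j entered its critical mode: suspend all lower-criticality jobs
            for other in jobs:
                if other.criticality < j.criticality and not other.finished:
                    other.suspended = True

jobs = [Job("hi", criticality=2, arrival=0.0, zero_slack=3.0),
        Job("lo", criticality=1, arrival=0.0, zero_slack=5.0)]
at_time(3.0, jobs)
print([(j.name, j.suspended) for j in jobs])   # the lower-criticality job is suspended
```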

Collaboration


Dive into Björn Andersson's collaborations.

Top Co-Authors

Dionisio de Niz
Carnegie Mellon University

Hyoseung Kim
Carnegie Mellon University

Mark H. Klein
Carnegie Mellon University

John P. Lehoczky
Carnegie Mellon University

Lutz Wrage
Software Engineering Institute

Sagar Chaki
Carnegie Mellon University