Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where David Broman is active.

Publication


Featured research published by David Broman.


International Conference on Embedded Software | 2013

Determinate composition of FMUs for co-simulation

David Broman; Christopher Brooks; Lev Greenberg; Edward A. Lee; Michael Masin; Stavros Tripakis; Michael Wetter

In this paper, we explain how to achieve deterministic execution of FMUs (Functional Mockup Units) under the FMI (Functional Mockup Interface) standard. In particular, we focus on co-simulation, where an FMU either contains its own internal simulation algorithm or serves as a gateway to a simulation tool. We give conditions on the design of FMUs and master algorithms (which orchestrate the execution of FMUs) to achieve deterministic co-simulation. We show that with the current version of the standard, these conditions demand capabilities from FMUs that are optional in the standard and rarely provided by an FMU in practice. When FMUs lacking these required capabilities are used to compose a model, many basic modeling capabilities become unachievable, including simple discrete-event simulation and variable-step-size numerical integration algorithms. We propose a small extension to the standard and a policy for designing FMUs that enables deterministic execution for a much broader class of models. The extension enables a master algorithm to query an FMU for the time of events that are expected in the future. We show that a model can be executed deterministically if all FMUs in the model are either memoryless or implement one of rollback or step-size prediction. We show further that such a model can contain at most one “legacy” FMU that is not memoryless and provides neither rollback nor step-size prediction.
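
As a rough illustration of the proposed step-size negotiation, the sketch below (Python, with hypothetical method names such as get_max_step_size and do_step rather than the actual FMI API) shows a master step that asks every FMU for the largest step it can accept before its next event, takes the minimum, and advances all FMUs together by that amount.

```python
# Illustrative master-algorithm sketch. The FMU interface below
# (get_max_step_size, do_step) is a simplified stand-in for the
# step-size-prediction extension discussed in the paper, not the FMI C API.

class ToyFmu:
    """A toy 'FMU' that produces a discrete event every `period` time units."""

    def __init__(self, name, period):
        self.name = name
        self.period = period
        self.time = 0.0
        self.next_event = period

    def get_max_step_size(self):
        # Step-size prediction: the largest step that does not skip an event.
        return self.next_event - self.time

    def do_step(self, h):
        self.time += h
        if self.time >= self.next_event - 1e-12:
            print(f"{self.name}: event at t = {self.time:.2f}")
            self.next_event += self.period


def master_step(fmus, h_max):
    """One coordination step: never advance past any FMU's predicted event."""
    h = min([f.get_max_step_size() for f in fmus] + [h_max])
    for f in fmus:
        f.do_step(h)
    return h


if __name__ == "__main__":
    fmus = [ToyFmu("A", period=0.5), ToyFmu("B", period=0.3)]
    t = 0.0
    while t < 1.0:
        t += master_step(fmus, h_max=0.25)
```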


International Conference on Computer Design | 2012

A PRET microarchitecture implementation with repeatable timing and competitive performance

Isaac Liu; Jan Reineke; David Broman; Michael Zimmer; Edward A. Lee

We contend that repeatability of execution times is crucial to the validity of testing of real-time systems. However, computer architecture designs fail to deliver repeatable timing, a consequence of aggressive techniques that improve average-case performance. This paper introduces the Precision-Timed ARM (PTARM), a precision-timed (PRET) microarchitecture implementation that exhibits repeatable execution times without sacrificing performance. The PTARM employs a repeatable thread-interleaved pipeline with an exposed memory hierarchy, including a repeatable DRAM controller. Our benchmarks show an improved throughput compared to a single-threaded in-order five-stage pipeline, given sufficient parallelism in the software.
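
To illustrate why a thread-interleaved pipeline yields repeatable timing, here is a small, purely schematic Python sketch (not the PTARM microarchitecture itself): pipeline slots are assigned to hardware threads in a fixed round-robin order, so a thread's latency depends only on its own instruction count, not on what the other threads do.

```python
# Schematic sketch of fixed round-robin thread interleaving, the idea
# behind PTARM's repeatable pipeline timing. Purely illustrative; the
# slot count and latency model are assumptions, not PTARM parameters.

NUM_THREADS = 4  # number of interleaved hardware threads (assumed)

def slot_owner(cycle):
    """Hardware thread that owns the pipeline slot in a given cycle."""
    return cycle % NUM_THREADS

def thread_latency(num_instructions):
    """With a fixed schedule and one instruction per owned slot, a thread's
    latency is a function of its own code alone, hence repeatable."""
    return num_instructions * NUM_THREADS

if __name__ == "__main__":
    print("slot schedule:", [slot_owner(c) for c in range(8)])  # 0 1 2 3 0 1 2 3
    print("latency for 100 instructions:", thread_latency(100), "cycles")
```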


Real-Time Technology and Applications Symposium | 2014

FlexPRET: A processor platform for mixed-criticality systems

Michael Zimmer; David Broman; Christopher Shaver; Edward A. Lee

Mixed-criticality systems, in which multiple tasks of varying criticality execute on a single hardware platform, are an emerging research area in real-time embedded systems. High-criticality tasks require spatial and temporal isolation guarantees for independent verification, and the task set should efficiently utilize hardware resources. Hardware-based isolation is desirable but often underutilizes hardware resources, which can consist of multiple single-core, multicore, or multithreaded processors. We present FlexPRET, a processor designed specifically for mixed-criticality systems by allowing each task to make a trade-off between hardware-based isolation and efficient processor utilization. FlexPRET uses fine-grained multithreading with flexible scheduling and timing instructions to provide this functionality.
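
The scheduling idea can be pictured with a small hypothetical Python sketch (not the FlexPRET hardware): hard real-time threads own fixed slots in a repeating schedule, which gives them temporal isolation, while soft threads share any slot that is unassigned or that an idle hard thread leaves unused, which improves utilization.

```python
# Illustrative sketch of FlexPRET-style flexible slot scheduling: hard
# real-time threads (HRTTs) own fixed slots; soft threads (SRTTs) share
# unassigned or unused slots. Thread names and slot layout are made up.
from itertools import cycle

HARD_SLOTS = {0: "H0", 2: "H1"}     # slots 0 and 2 reserved for hard threads
SOFT_THREADS = cycle(["S0", "S1"])  # soft threads share leftover bandwidth
SCHEDULE_LENGTH = 4                 # the slot schedule repeats every 4 cycles

def thread_for_cycle(cycle_index, active_hard_threads):
    """Pick the thread to issue in this cycle. An idle hard thread donates
    its slot to the soft threads instead of wasting it."""
    owner = HARD_SLOTS.get(cycle_index % SCHEDULE_LENGTH)
    if owner is not None and owner in active_hard_threads:
        return owner
    return next(SOFT_THREADS)

if __name__ == "__main__":
    # H1 is idle, so its reserved slots are handed to the soft threads.
    print([thread_for_cycle(c, {"H0"}) for c in range(8)])
```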


International Conference on Control Applications | 2006

OpenModelica - A free open-source environment for system modeling, simulation, and teaching

Peter Fritzson; Peter Aronsson; Adrian Pop; Håkan Lundvall; Kaj Nyström; Levon Saldamli; David Broman; Anders Sandholm

Modelica is a modern, strongly typed, declarative, and object-oriented language for modeling and simulation of complex systems. This paper gives a quick overview of some aspects of the OpenModelica environment - an open-source environment for modeling, simulation, and development of Modelica applications. An introduction to the objectives of the environment is given, the architecture is outlined, and a number of examples are presented.


IEEE Transactions on Education | 2012

The Company Approach to Software Engineering Project Courses

David Broman; Kristian Sandahl; Mohamed Abu Baker

Teaching larger software engineering project courses at the end of a computing curriculum is a way for students to learn some aspects of real-world jobs in industry. Such courses, often referred to as capstone courses, are effective for learning how to apply the skills students have acquired in, for example, design, testing, and configuration management. However, these courses are typically carried out in small teams, giving only a limited, realistic perspective of the problems faced when working in real companies. This paper describes an alternative approach to classic capstone projects, with the aim of being more realistic from an organizational, process, and communication perspective. This methodology, called the company approach, is described in terms of intended learning outcomes, teaching/learning activities, and assessment tasks. The approach is implemented and evaluated in a larger Master's-level student course.


International Conference on Hybrid Systems: Computation and Control | 2015

Requirements for hybrid cosimulation standards

David Broman; Lev Greenberg; Edward A. Lee; Michael Masin; Stavros Tripakis; Michael Wetter

This paper defines a suite of requirements for future hybrid cosimulation standards, and specifically provides guidance for development of a hybrid cosimulation version of the Functional Mockup Interface (FMI). A cosimulation standard defines interfaces that enable diverse simulation tools to interoperate. Specifically, one tool defines a component that forms part of a simulation model in another tool. We focus on components with inputs and outputs that are functions of time, and specifically on mixtures of discrete events and continuous time signals. This hybrid mixture is not well supported by existing cosimulation standards, and specifically not by FMI 2.0, for reasons that are explained in this paper. The paper defines a suite of test components, giving a mathematical model of an ideal behavior, plus a discussion of practical implementation considerations. The discussion includes acceptance criteria by which we can determine whether a standard supports definition of each component. In addition, we define a set of test compositions that define requirements for coordination between components, including consistent handling of timed events.
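
As one concrete example of the kind of test component the paper argues a hybrid cosimulation standard must be able to express, consider an ideal periodic discrete-event source: its output carries a value only at event instants and is absent in between. The sketch below is illustrative Python, not a component taken from the paper's suite or from FMI.

```python
# Illustrative ideal behavior of a periodic discrete-event source: the
# output is present only at t = k * period and absent otherwise. This is
# an illustration of a hybrid (discrete-event) signal, not FMI code.

ABSENT = None  # marker for "no event at this time instant"

def periodic_event_source(t, period=1.0, value=1, eps=1e-12):
    """Ideal output as a function of time."""
    k = round(t / period)
    return value if abs(t - k * period) < eps else ABSENT

if __name__ == "__main__":
    for t in (0.0, 0.5, 1.0, 1.5, 2.0):
        print(f"t = {t:.1f}: {periodic_event_source(t)}")
```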


Real-Time Technology and Applications Symposium | 2014

WCET-aware dynamic code management on scratchpads for Software-Managed Multicores

Yooseong Kim; David Broman; Jian Cai; Aviral Shrivastava

Software Managed Multicore (SMM) architectures have advantageous scalability, power efficiency, and predictability characteristics, making SMM particularly promising for real-time systems. In SMM architectures, each core can only access its scratchpad memory (SPM); any access to main memory is done explicitly through DMA instructions. As a consequence, dynamic code management techniques are essential for loading program code from the main memory to the SPM. Current state-of-the-art dynamic code management techniques for SMM architectures are, however, optimized for average-case execution time, not worst-case execution time (WCET), which is vital for hard real-time systems. In this paper, we present two novel WCET-aware dynamic SPM code management techniques for SMM architectures. The first technique is optimal and based on integer linear programming (ILP), whereas the second technique is a heuristic that is suboptimal but scalable. Experimental results with benchmarks from the Mälardalen WCET suite and the MiBench suite show that our ILP solution can reduce WCET estimates by up to 80% compared to previous techniques. Furthermore, our heuristic can, for most benchmarks, find the same optimal mappings within one second on a 2 GHz dual-core machine.
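
To make the flavor of the ILP concrete, the sketch below uses the PuLP library (an assumption chosen for illustration; it is not the formulation or tooling from the paper) to choose which functions stay resident in the SPM, subject to the SPM capacity, so as to minimize the DMA reload cost charged on the worst-case path. The real formulation reasons about paths through the control-flow graph; this toy model collapses that to a single reload cost per function.

```python
# Toy knapsack-style ILP for SPM code placement, written with PuLP purely
# for illustration; not the WCET-aware formulation from the paper.
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

SPM_SIZE = 2048  # bytes of scratchpad available for code (made-up number)

# function name -> (code size in bytes, DMA reload cost on the worst-case
# path if the function is NOT kept resident). All numbers are made up.
functions = {
    "fft":    (1200, 900),
    "filter": (800,  700),
    "init":   (600,  100),
    "log":    (400,   50),
}

prob = LpProblem("spm_code_placement", LpMinimize)
resident = {f: LpVariable(f"resident_{f}", cat=LpBinary) for f in functions}

# Objective: pay the reload cost for every function not kept resident.
prob += lpSum(cost * (1 - resident[f]) for f, (_, cost) in functions.items())

# Capacity constraint: resident functions must fit in the SPM together.
prob += lpSum(size * resident[f] for f, (size, _) in functions.items()) <= SPM_SIZE

prob.solve()
print("resident in SPM:", [f for f in functions if resident[f].value() == 1])
```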


Empirical Software Engineering | 2016

Automated bug assignment: Ensemble-based machine learning in large scale industrial contexts

Leif Jonsson; Markus Borg; David Broman; Kristian Sandahl; Sigrid Eldh; Per Runeson

Bug report assignment is an important part of software maintenance. In particular, incorrect assignments of bug reports to development teams can be very expensive in large software development projects. Several studies propose automating bug assignment using machine learning in open source software contexts, but no study exists for large-scale proprietary projects in industry. The goal of this study is to evaluate automated bug assignment techniques that are based on machine learning classification. In particular, we study the state-of-the-art ensemble learner Stacked Generalization (SG), which combines several classifiers. We collect more than 50,000 bug reports from five development projects at two companies in different domains. We implement automated bug assignment and evaluate its performance in a set of controlled experiments. We show that SG scales to large-scale industrial application and that it outperforms the use of individual classifiers for bug assignment, reaching prediction accuracies from 50% to 89% when large training sets are used. In addition, we show how old training data can decrease the prediction accuracy of bug assignment. We advise industry to use SG for bug assignment in proprietary contexts, using at least 2,000 bug reports for training. Finally, we highlight the importance of not relying solely on results from cross-validation when evaluating automated bug assignment.
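
For readers who want to see the stacked-generalization idea in code, here is a small illustrative sketch using scikit-learn (an assumption chosen for brevity; it is not the tooling or data from the study): several base classifiers are trained over TF-IDF features of bug report text, and a logistic-regression combiner learns from their cross-validated predictions. The reports and team names are invented.

```python
# Illustrative stacked-generalization (SG) bug assignment with scikit-learn.
# The reports, teams, and classifier choices are made up for the example;
# the study itself evaluated SG on large proprietary data sets.
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reports = [
    "crash in radio scheduler under load",
    "scheduler deadlock after handover",
    "settings page misaligned on small screens",
    "settings page crashes on rotate",
    "billing report exports wrong totals",
    "invoice totals off by a rounding error",
] * 2  # repeated so every class has enough samples for the internal CV
teams = ["baseband", "baseband", "ui", "ui", "billing", "billing"] * 2

stack = StackingClassifier(
    estimators=[
        ("nb", MultinomialNB()),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ],
    final_estimator=LogisticRegression(),  # the level-1 combiner
    cv=2,
)
model = make_pipeline(TfidfVectorizer(), stack)
model.fit(reports, teams)
print(model.predict(["deadlock in the radio scheduler"]))
```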


Real-Time Technology and Applications Symposium | 2014

Relaxing the synchronous approach for mixed-criticality systems

Eugene Yip; Matthew M. Y. Kuo; Partha S. Roop; David Broman

Synchronous languages are widely used to design safety-critical embedded systems. These languages are based on the synchrony hypothesis, asserting that all tasks must complete instantaneously at each logical time step. This assertion is, however, unsuitable for the design of mixed-criticality systems, where some tasks can tolerate missed deadlines. This paper proposes a novel extension to the synchronous approach for supporting three levels of task criticality: life, mission, and non-critical. We achieve this by relaxing the synchrony hypothesis to allow tasks that can tolerate bounded or unbounded deadline misses. We address the issue of task communication between multi-rate, mixed-criticality tasks, and propose a deterministic lossless communication model. To maximize system utilization, we present a hybrid static and dynamic scheduling approach that executes schedulable tasks during slack time. Extensive benchmarking shows that our approach can schedule up to 15% more task sets and achieve an average of 5.38% better system utilization than the Early-Release EDF (ER-EDF) approach. Tasks are scheduled more fairly under our approach and achieve consistently higher execution frequencies, but require more preemptions.
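
A minimal sketch of the slack-based idea, in Python and under assumed task names and timings (this is a simplification, not the paper's scheduler): within each logical step, life-critical tasks always run to completion, while mission and non-critical tasks only consume whatever slack remains and tolerate a skipped release when the slack runs out.

```python
# Simplified illustration of slack-time scheduling across three criticality
# levels (life, mission, non-critical). Task names, execution times, and the
# step length are made up; this is not the paper's hybrid scheduler.

STEP = 10.0  # length of one logical time step, in milliseconds (assumed)

def run_step(life_tasks, mission_tasks, noncrit_tasks):
    """Run one logical step: life-critical tasks are always executed;
    lower-criticality tasks only fill the remaining slack."""
    elapsed = 0.0
    for name, exec_time in life_tasks:            # statically guaranteed
        elapsed += exec_time
        print(f"life     {name}: done at {elapsed:.1f} ms")
    for level, pool in (("mission", mission_tasks), ("noncrit", noncrit_tasks)):
        for name, exec_time in pool:              # dynamically fill slack
            if elapsed + exec_time > STEP:
                print(f"{level}  {name}: skipped (tolerated deadline miss)")
                continue
            elapsed += exec_time
            print(f"{level}  {name}: done at {elapsed:.1f} ms")

if __name__ == "__main__":
    run_step(
        life_tasks=[("airbag", 3.0), ("braking", 2.5)],
        mission_tasks=[("navigation", 3.0)],
        noncrit_tasks=[("logging", 2.0), ("telemetry", 1.0)],
    )
```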


Australian Software Engineering Conference | 2009

Formal Semantics Based Translator Generation and Tool Development in Practice

Peter Fritzson; Adrian Pop; David Broman; Peter Aronsson

In this paper we report on a long-term research effort to develop and use efficient language implementation generators in practice. The generator is applied to a number of different languages, some of which are used for projects in industry. The formal specification style used is Operational Semantics, primarily in the form called Natural Semantics, represented and supported by a meta-language and tool called the Relational Meta Language (RML), which can generate efficient implementations in C, on par with hand-implemented code. Generating implementations from formal specifications is assumed to give advantages such as high-level descriptions, a higher degree of correctness, and consistency between specification and implementation. To what extent can this be realized in practice? Does it scale to large language implementations? To answer some of these questions we have developed specifications of a range of languages: imperative, functional, object-oriented (Java), and equation-based (Modelica). The sizes of the specifications range from half a page to large specifications of 60,000 lines. It turns out to be possible to generate efficient compilers, even for large languages. However, the performance of the generator tool and the user support of the development environment become increasingly important for large specifications. To satisfy such user needs, the speed of the generator was increased by a factor of ten to reduce turn-around time, and an Eclipse plug-in, including a debugger, was developed. For very large specifications, the structuring and modularity of the specification itself also become essential for performance and maintainability.
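
To convey the flavor of a natural-semantics (big-step operational semantics) specification, here is a hand-written Python sketch of big-step evaluation rules for a tiny expression language; in RML the corresponding rules are written declaratively as inference rules and compiled to efficient C, so this is only an illustration of the style, not generated code.

```python
# Hand-written illustration of big-step (natural semantics) evaluation
# rules for a tiny expression language. In RML such rules are stated
# declaratively and compiled to C; this sketch only conveys the style.

# Expression forms: ("num", n) | ("add", e1, e2) | ("mul", e1, e2)

def eval_exp(e):
    """Implements judgments of the form  e => v."""
    tag = e[0]
    if tag == "num":   # axiom:                      num(n) => n
        return e[1]
    if tag == "add":   # rule: e1 => v1  e2 => v2 / add(e1, e2) => v1 + v2
        return eval_exp(e[1]) + eval_exp(e[2])
    if tag == "mul":   # rule: e1 => v1  e2 => v2 / mul(e1, e2) => v1 * v2
        return eval_exp(e[1]) * eval_exp(e[2])
    raise ValueError(f"unknown expression form: {tag!r}")

if __name__ == "__main__":
    # (1 + 2) * 7  evaluates to  21
    print(eval_exp(("mul", ("add", ("num", 1), ("num", 2)), ("num", 7))))
```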

Collaboration


Dive into David Broman's collaborations.

Top Co-Authors

Edward A. Lee
University of California