Publication


Featured research published by Sebastian Fischmeister.


Biomedical Instrumentation & Technology | 2009

Plug-and-Play for Medical Devices: Experiences from a Case Study

Sebastian Fischmeister; Julian M. Goldman; Insup Lee; Robert Trausmuth

Medical devices are pervasive throughout modern healthcare, but each device works on its own and in isolation. Interoperable medical devices would lead to clear benefits for the care provider and the patient, such as more accurate assessment of the patient’s health and safety interlocks that would enable error-resilient systems. The Center for Integration of Medicine & Innovative Technology (www.CIMIT.org) sponsors the Medical Device Plug-and-Play Interoperability program (www.MDPnP.org), which is leading the development and adoption of standards for medical device interoperability. Such interoperable medical devices will lead to increased patient safety and enable new treatment options, and the aim of this project is to show the benefits of interoperable and interconnected medical devices.


IEEE Transactions on Industrial Informatics | 2010

Time-Aware Instrumentation of Embedded Software

Sebastian Fischmeister; Patrick Lam

Software instrumentation is a key technique in many stages of the development process. It is particularly important for debugging embedded systems. Instrumented programs produce data traces which enable the developer to locate the origins of misbehaviors in the system under test. However, producing data traces incurs runtime overhead in the form of additional computation resources for capturing and copying the data. The instrumentation may therefore interfere with the system's timing and perturb its behavior. In this work, we propose an instrumentation technique for applications with temporal constraints, specifically targeting background/foreground or cyclic executive systems. Our framework permits reasoning about space and time and enables the composition of software instrumentations. In particular, we propose a definition of trace reliability, which enables us to instrument real-time applications that aggressively push their time budgets. Using the framework, we present a method with low perturbation by optimizing the number of insertion points and the trace buffer size with respect to code size and time budgets. Finally, we apply the theory to two concrete case studies: we instrument the OpenEC firmware for the keyboard controller of the One Laptop Per Child project, as well as an implementation of a flash file system.
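The fixed trace buffer idea can be sketched in a few lines (an illustrative approximation, not the paper's actual design; the class name and the `dropped` counter are assumptions):

```python
# Minimal fixed-size trace buffer sketch: each instrumentation write is O(1)
# and memory use is known at build time, which keeps the cost of an
# insertion point within a fixed time budget. Names are illustrative.
class TraceBuffer:
    def __init__(self, size):
        self.buf = [None] * size
        self.head = 0
        self.dropped = 0  # overwritten records: a proxy for trace reliability

    def record(self, event):
        if self.buf[self.head] is not None:
            self.dropped += 1  # oldest record lost before being drained
        self.buf[self.head] = event
        self.head = (self.head + 1) % len(self.buf)

tb = TraceBuffer(3)
for i in range(5):          # 5 events into 3 slots
    tb.record(("func_enter", i))
print(tb.dropped)           # 2 records were overwritten
```

Sizing the buffer against the event rate and drain budget is the kind of space/time trade-off the paper's framework reasons about formally.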


Formal Methods | 2011

Sampling-based runtime verification

Borzoo Bonakdarpour; Samaneh Navabpour; Sebastian Fischmeister

The literature of runtime verification mostly focuses on event-triggered solutions, where a monitor is invoked by every change in the state of the system and evaluates properties of the system. This constant invocation introduces two major drawbacks to the system under scrutiny at run time: (1) significant overhead and (2) unpredictability. To circumvent the latter drawback, in this paper, we introduce a time-triggered approach, where the monitor periodically takes samples from the system to analyze the system's health. We propose formal semantics of sampling-based monitoring and discuss how to optimize the sampling period using minimum auxiliary memory. We show that such optimization is NP-complete and consequently introduce a mapping to Integer Linear Programming. Experiments on benchmark applications show that our approach introduces bounded overhead and effectively reduces involvement of the monitor at run time using negligible auxiliary memory.
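The contrast between event-triggered and time-triggered monitoring can be sketched over a recorded state trace (a simplified illustration; the property, period, and variable values are assumptions, not the paper's formal semantics):

```python
def time_triggered_samples(trace, period):
    """Instead of invoking the monitor on every state change
    (event-triggered), inspect only every `period`-th state: monitoring
    overhead becomes bounded and predictable, at the cost of possibly
    missing violations that appear and vanish between two samples."""
    return trace[::period]

def check(samples, prop):
    """Evaluate a property over the sampled states; return violations."""
    return [s for s in samples if not prop(s)]

trace = list(range(100))                 # 100 states of a hypothetical counter
samples = time_triggered_samples(trace, period=10)
print(len(samples))                      # 10 monitor invocations instead of 100
print(check(samples, lambda v: v < 85))  # violation first seen at sample 90
```

Choosing the period (and the auxiliary memory needed to buffer changes between samples) so that no violation is missed is exactly the optimization the paper shows to be NP-complete.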


IEEE Transactions on Industrial Informatics | 2009

Hardware Acceleration for Conditional State-Based Communication Scheduling on Real-Time Ethernet

Sebastian Fischmeister; Robert Trausmuth; Insup Lee

Distributed real-time systems implement distributed applications with timeliness requirements. Such systems require a deterministic communication medium with bounded communication delays. Ethernet is a widely used commodity network with many appliances and network components and represents a natural fit for real-time applications; unfortunately, standard Ethernet provides no bounded communication delays. Conditional state-based communication schedules provide expressive means for specifying and executing schedules with choice points, while staying verifiable. Such schedules implement an arbitration scheme and provide the developer with means to fit the arbitration scheme to the application demands instead of requiring the developer to tweak the application to fit a predefined scheme. An evaluation of this approach in software prototypes showed that jitter and execution overhead may diminish the gains. This work successfully addresses this problem with a synthesized soft processor. We present results on the development of the soft processor, the design choices, and the measurements on throughput and robustness.


IEEE Transactions on Computers | 2007

A Verifiable Language for Programming Real-Time Communication Schedules

Sebastian Fischmeister; Oleg Sokolsky; Insup Lee

Distributed hard real-time systems require predictable communication at the network level and verifiable communication behavior at the application level. At the network level, communication between nodes must be guaranteed to happen within bounded time, and one common approach is to restrict network access by enforcing a time-division multiple access (TDMA) schedule. At the application level, the application's communication behavior should be verified to ensure that the application uses the predictable communication in the intended way. Network code is a domain-specific programming language for writing predictable, verifiable communication for distributed real-time applications. In this paper, we present the syntax and semantics of network code, how we can implement different scheduling policies, and how we can use tools such as model checking to formally verify the properties of network code programs. We also present an implementation of a runtime system for executing network code on top of RTLinux and measure the overhead incurred by the runtime system.
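The TDMA idea at the network level can be illustrated with a small slot calculation (a generic round-robin TDMA sketch, not the network code language itself; slot lengths and node IDs are made up):

```python
def next_slot_start(node_id, slot_len_us, num_nodes, now_us):
    """Under a round-robin TDMA schedule, the bus cycle is
    num_nodes * slot_len_us long and each node owns one fixed slot per
    cycle, so its worst-case wait for the medium is bounded by one cycle."""
    cycle_us = num_nodes * slot_len_us
    slot_offset = node_id * slot_len_us
    wait = (slot_offset - now_us % cycle_us) % cycle_us
    return now_us + wait

# Three nodes, 100 us slots: node 2 owns offset 200 in every 300 us cycle.
print(next_slot_start(2, 100, 3, 0))    # 200
print(next_slot_start(2, 100, 3, 250))  # 500
```

Because every node can compute its next window locally, access delays are bounded by construction; network code generalizes this with verifiable choice points in the schedule.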


Lecture Notes in Computer Science | 2001

Evaluating the Security of Three Java-Based Mobile Agent Systems

Sebastian Fischmeister; Giovanni Vigna; Richard A. Kemmerer

The goal of mobile agent systems is to provide a distributed computing infrastructure supporting applications whose components can move between different execution environments. The design and implementation of mechanisms to relocate computations requires a careful assessment of security issues. If these issues are not addressed properly, mobile agent technology cannot be used to implement real-world applications. This paper describes the initial steps of a research effort to design and implement security middleware for mobile code systems in general and mobile agent systems in particular. This initial phase focused on understanding and evaluating the security mechanisms of existing mobile agent systems. The evaluation was performed by deploying several mobile agent systems in a testbed network, implementing attacks on the systems, and evaluating the results. The long-term goal of this research is to develop guidelines for the security analysis of mobile agent systems and to determine if existing systems provide the security abstractions and mechanisms needed to develop real-world applications.


Foundations of Software Engineering | 2013

RiTHM: a tool for enabling time-triggered runtime verification for C programs

Samaneh Navabpour; Yogi Joshi; Wallace Wu; Shay Berkovich; Ramy Medhat; Borzoo Bonakdarpour; Sebastian Fischmeister

We introduce the tool RiTHM (Runtime Time-triggered Heterogeneous Monitoring). RiTHM takes a C program under inspection and a set of LTL properties as input and generates an instrumented C program that is verified at run time by a time-triggered monitor. RiTHM provides two techniques based on static analysis and control theory to minimize instrumentation of the input C program and monitoring intervention. The monitor's verification decision procedure is sound and complete and exploits GPU many-core technology to speed up and encapsulate monitoring tasks.


Real-time Systems | 2013

Implementation and evaluation of global and partitioned scheduling in a real-time OS

Giovani Gracioli; Antônio Augusto Fröhlich; Rodolfo Pellizzoni; Sebastian Fischmeister

In this work, we provide an experimental comparison between Global-EDF and Partitioned-EDF, considering the run-time overhead of a real-time operating system (RTOS). Recent works have confirmed that OS implementation aspects, such as the choice of scheduling data structures and interrupt handling mechanisms, impact real-time schedulability as much as scheduling theoretic aspects. However, these studies used real-time patches applied to a general-purpose OS. By measuring the run-time overhead of an RTOS designed from scratch, we show how close the schedulability ratio of task sets is to the theoretical hard real-time schedulability tests. Moreover, we show how a well-designed object-oriented RTOS allows code reuse of scheduling components (e.g., thread, scheduling criteria, and schedulers) and easy real-time scheduling extensions. We compare our RTOS to a real-time patch for Linux in terms of the task set schedulability ratio of several generated task sets. In some cases, Global-EDF considering the overhead of the RTOS is superior to Partitioned-EDF considering the overhead of the patched Linux, which clearly shows how different OSs impact hard real-time schedulers.
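The partitioned side of the comparison can be sketched with a textbook first-fit heuristic (assuming implicit-deadline periodic tasks; this is a generic illustration, not the RTOS implementation evaluated in the paper):

```python
def partition_edf_first_fit(utilizations, cores):
    """Partitioned-EDF assigns tasks to cores up front; a common heuristic
    is first-fit decreasing on utilization, since EDF schedules any single
    core whose total utilization stays <= 1 (implicit deadlines).
    Returns the per-core assignment, or None if a task does not fit."""
    load = [0.0] * cores
    assignment = [[] for _ in range(cores)]
    for u in sorted(utilizations, reverse=True):
        for c in range(cores):
            if load[c] + u <= 1.0:
                load[c] += u
                assignment[c].append(u)
                break
        else:
            return None  # no core fits this task: heuristic declares failure
    return assignment

# Four tasks on two cores; Global-EDF would instead keep one shared queue.
print(partition_edf_first_fit([0.6, 0.5, 0.3, 0.3], cores=2))
```

Global-EDF avoids this bin-packing step but pays migration and shared-queue overhead at run time, which is exactly the OS-level cost the paper measures.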


Languages, Compilers, and Tools for Embedded Systems | 2010

Sampling-based program execution monitoring

Sebastian Fischmeister; Yanmeng Ba

Due to its high overall cost during product development, program debugging is an important aspect of system development. Debugging is a hard and complex activity, especially in time-sensitive systems, which have limited resources and demanding timing constraints. System tracing is a frequently used technique for debugging embedded systems. A specific use of system tracing is to monitor and debug control-flow problems in programs. However, it is difficult to implement because of the potentially high overhead it might introduce to the system and the changes which can occur to the system behavior due to tracing. To solve the above problems, in this work, we present a sampling-based approach to execution monitoring which specifically helps developers debug time-sensitive systems such as real-time applications. We build the system model and propose three theorems to determine the sampling period in different scenarios. We also design seven heuristics and an instrumentation framework to extend the sampling period, which reduces the monitoring overhead and achieves an optimal tradeoff between accuracy and the overhead introduced by instrumentation. Using this monitoring framework, we can use the information extracted through sampling to reconstruct the system state and execution paths to locate the deviation.


Theory and Applications of Satisfiability Testing | 2014

Impact of Community Structure on SAT Solver Performance

Zack Newsham; Vijay Ganesh; Sebastian Fischmeister; Gilles Audemard; Laurent Simon

Modern CDCL SAT solvers routinely solve very large industrial SAT instances in relatively short periods of time. It is clear that these solvers somehow exploit the structure of real-world instances. However, to date there have been few results that precisely characterise this structure. In this paper, we provide evidence that the community structure of real-world SAT instances is correlated with the running time of CDCL SAT solvers. It has been known for some time that real-world SAT instances, viewed as graphs, have natural communities in them. A community is a sub-graph of the graph of a SAT instance, such that this sub-graph has more internal edges than edges going out to the rest of the graph. The community structure of a graph is often characterised by a quality metric called Q. Intuitively, a graph with high-quality community structure (high Q) is easily separable into smaller communities, while a graph with low Q is not. We provide three results based on empirical data which show that community structure of real-world industrial instances is a better predictor of the running time of CDCL solvers than other commonly considered factors such as variables and clauses. First, we show that there is a strong correlation between the Q value and the Literal Block Distance metric of quality of conflict clauses used in clause-deletion policies in Glucose-like solvers. Second, using regression analysis, we show that the number of communities and the Q value of the graph of real-world SAT instances is more predictive of the running time of CDCL solvers than traditional metrics like number of variables or clauses. Finally, we show that randomly-generated SAT instances with 0.05 ≤ Q ≤ 0.13 are dramatically harder to solve for CDCL solvers than otherwise.
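The quality metric Q mentioned above is standard graph modularity; a minimal computation might look like this (a generic implementation over an undirected edge list, not the paper's SAT-specific tooling; the example graph is made up):

```python
def modularity(edges, communities):
    """Newman modularity: for each community, the fraction of edges that
    fall inside it minus the fraction expected if edges were placed at
    random given the node degrees. High Q: easily separable communities."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    comm = {v: i for i, members in enumerate(communities) for v in members}
    q = 0.0
    for i, members in enumerate(communities):
        internal = sum(1 for u, v in edges if comm[u] == i and comm[v] == i)
        total_degree = sum(degree[v] for v in members)
        q += internal / m - (total_degree / (2 * m)) ** 2
    return q

# Two triangles joined by a single bridge edge: clearly separable structure.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(round(modularity(edges, [{0, 1, 2}, {3, 4, 5}]), 3))  # 0.357
```

For SAT instances, the graph is typically built from the formula (e.g. variables as nodes, edges between variables sharing a clause) before Q is computed over a discovered partition.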

Collaboration


Dive into Sebastian Fischmeister's collaborations.

Top Co-Authors

Insup Lee | University of Pennsylvania
Madhukar Anand | University of Pennsylvania
Hany Kashif | University of Waterloo
Ramy Medhat | University of Waterloo