Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Dionisio de Niz is active.

Publications


Featured research published by Dionisio de Niz.


Real-Time Systems Symposium | 2009

On the Scheduling of Mixed-Criticality Real-Time Task Sets

Dionisio de Niz; Karthik Lakshmanan; Ragunathan Rajkumar

The functional consolidation induced by cost-reduction trends in embedded systems can force tasks of different criticality (e.g., ABS brakes with DVD playback) to share a processor and interfere with each other. These systems are known as mixed-criticality systems. While traditional temporal isolation techniques prevent all inter-task interference, they waste utilization because they must reserve the absolute worst-case execution time (WCET) for every task. In many mixed-criticality systems the WCET is not only rarely reached but at times difficult to calculate, such as the time to localize all possible objects in an obstacle-avoidance algorithm. In this situation it is more appropriate to allow the execution time to grow by stealing cycles from lower-criticality tasks. Even more crucial is the fact that temporal isolation techniques can stop a high-criticality task (that was overrunning its nominal WCET) to allow a low-criticality task to run, making the former miss its deadline. We identify this as the criticality inversion problem. In this paper, we characterize the criticality inversion problem and present a new scheduling scheme called zero-slack scheduling that implements an alternative protection scheme we refer to as asymmetric protection. This protection only prevents interference from lower-criticality to higher-criticality tasks and improves the schedulable utilization. We use an offline algorithm with two parts: a zero-slack calculation algorithm and a slack analysis algorithm. The zero-slack calculation algorithm minimizes the utilization needed by a task set by reducing the time low-criticality tasks are preempted by high-criticality ones. This algorithm can be used with priority-based preemptive schedulers (e.g., RMS, EDF). The slack analysis algorithm is specific to each priority-based preemptive scheduler, and we develop and evaluate the one for RMS. We prove that this algorithm provides the same level of protection against criticality inversion as the best-known priority assignment for this purpose, criticality-as-priority assignment (CAPA). We also prove that zero-slack RM provides the same schedulable utilization as RMS when all tasks have equal criticality levels. Finally, we present our implementation of the runtime enforcement mechanisms in Linux/RK to demonstrate its practicality.
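
The zero-slack mechanism lends itself to a compact illustration. The sketch below is a simplified reading of the idea described above, not the paper's algorithm: it assumes implicit deadlines, ignores the iterative interplay between normal and critical modes, and computes a task's zero-slack instant simply as its deadline minus its critical-mode response time under rate-monotonic priorities. All task names and numbers are hypothetical.

```python
import math
from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    name: str
    period: float        # implicit deadline (D = T)
    c_nominal: float     # nominal WCET
    c_overload: float    # overloaded WCET that must still be protected
    criticality: int     # higher value = more critical

def critical_mode_response(task: Task, tasks: List[Task]) -> float:
    """Response time of `task` once low-criticality tasks are suspended:
    only equal- or higher-criticality tasks with higher RM priority interfere."""
    interferers = [t for t in tasks
                   if t is not task
                   and t.criticality >= task.criticality
                   and t.period < task.period]
    r = task.c_overload
    while True:
        r_next = task.c_overload + sum(
            math.ceil(r / t.period) * t.c_overload for t in interferers)
        if r_next == r:
            return r
        if r_next > task.period:
            return float("inf")   # not schedulable even in critical mode
        r = r_next

def zero_slack_instant(task: Task, tasks: List[Task]) -> float:
    """Latest time after release at which the task must switch to critical mode."""
    return task.period - critical_mode_response(task, tasks)

tasks = [
    Task("obstacle_avoidance", period=100, c_nominal=30, c_overload=50, criticality=2),
    Task("dvd_decode",         period=40,  c_nominal=10, c_overload=10, criticality=1),
]
for t in tasks:
    print(t.name, "zero-slack instant:", zero_slack_instant(t, tasks))
```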


Real-Time Systems Symposium | 2009

Coordinated Task Scheduling, Allocation and Synchronization on Multiprocessors

Karthik Lakshmanan; Dionisio de Niz; Ragunathan Rajkumar

Chip multiprocessors represent a dominant shift in the field of processor design. Better utilization of such technology in the real-time context requires coordinated approaches to task allocation, scheduling, and synchronization. In this paper, we characterize various scheduling penalties arising from multiprocessor task synchronization, including (i) blocking delays on global critical sections, (ii) back-to-back execution due to jitter from blocking, and (iii) multiple priority inversions due to remote resource sharing. We analyze the impact of these scheduling penalties under different execution control policies (ECPs), which compensate for the scheduling penalties incurred by tasks due to remote blocking. Subsequently, we develop a synchronization-aware task allocation algorithm that explicitly accommodates these global task synchronization penalties. The key idea of our algorithm is to bundle tasks that access a common shared resource and co-locate them, thereby transforming global resource sharing into local sharing. This approach reduces the above-mentioned penalties associated with remote task synchronization. Experimental results indicate that such a coordinated approach to scheduling, allocation, and synchronization yields significant benefits (savings of as much as 50% in the number of processing cores required). An implementation of this approach is available as part of our RT-MAP library, which uses the pthreads implementation of Linux 2.6.22.
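
The bundling idea can be sketched in a few lines. The code below only illustrates "bundle tasks that share a resource and pack bundles whole"; the paper's actual allocation heuristics, blocking-aware analysis, and any bundle-splitting fallbacks are not reproduced, and the task names, resources, and utilizations are made up.

```python
from typing import Dict, List, Set

def bundle_by_resource(task_resources: Dict[str, Set[str]]) -> List[Set[str]]:
    """Merge tasks into bundles: two tasks end up in the same bundle if they
    are connected through shared resources."""
    bundles: List[Set[str]] = []
    for task, resources in task_resources.items():
        merged = {task}
        kept = []
        for b in bundles:
            if any(task_resources[t] & resources for t in b):
                merged |= b          # this bundle shares a resource with `task`
            else:
                kept.append(b)
        kept.append(merged)
        bundles = kept
    return bundles

def pack_bundles(bundles: List[Set[str]], utilization: Dict[str, float],
                 capacity: float = 1.0) -> List[List[Set[str]]]:
    """First-fit decreasing packing of whole bundles onto cores, so all
    resource sharing inside a bundle stays local to one core."""
    cores: List[List[Set[str]]] = []
    loads: List[float] = []
    for b in sorted(bundles, key=lambda b: -sum(utilization[t] for t in b)):
        u = sum(utilization[t] for t in b)
        for i in range(len(cores)):
            if loads[i] + u <= capacity:
                cores[i].append(b)
                loads[i] += u
                break
        else:
            cores.append([b])
            loads.append(u)
    return cores

task_resources = {"t1": {"r1"}, "t2": {"r1"}, "t3": {"r2"}, "t4": set()}
utilization = {"t1": 0.3, "t2": 0.2, "t3": 0.4, "t4": 0.5}
print(pack_bundles(bundle_by_resource(task_resources), utilization))
```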


Real-Time Technology and Applications Symposium | 2014

Bounding memory interference delay in COTS-based multi-core systems

Hyoseung Kim; Dionisio de Niz; Björn Andersson; Mark H. Klein; Onur Mutlu; Ragunathan Rajkumar

In commercial off-the-shelf (COTS) multi-core systems, a task running on one core can be delayed by other tasks running simultaneously on other cores due to interference in the shared DRAM main memory. Such memory interference delay can be large and highly variable, thereby posing a significant challenge for the design of predictable real-time systems. In this paper, we present techniques to provide a tight upper bound on the worst-case memory interference in a COTS-based multi-core system. We explicitly model the major resources in the DRAM system, including banks, buses, and the memory controller. By considering their timing characteristics, we analyze the worst-case memory interference delay imposed on a task by other tasks running in parallel. To the best of our knowledge, this is the first work to bound the request re-ordering effect of COTS memory controllers. Our work also quantifies the extent to which memory interference can be reduced by partitioning DRAM banks. We evaluate our approach on a commodity multi-core platform running Linux/RK. Experimental results show that our approach provides an upper bound very close to our measured worst-case interference.
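
To give a feel for the shape of such a bound, here is a deliberately crude toy model, not the paper's analysis: it charges each of a task's memory requests one worst-case service time per co-running core, and assumes bank partitioning removes row-conflict penalties. The function, latency values, and request counts are hypothetical; the paper models banks, buses, request re-ordering, and the memory controller in far more detail.

```python
def memory_interference_bound(num_requests: int, num_other_cores: int,
                              t_row_conflict_ns: float, t_row_hit_ns: float,
                              banks_partitioned: bool) -> float:
    """Toy worst-case extra delay (ns) from co-running cores: each request may
    wait behind one request from every other core."""
    per_request = t_row_hit_ns if banks_partitioned else t_row_conflict_ns
    return num_requests * num_other_cores * per_request

# Hypothetical numbers: 10,000 DRAM requests, 3 co-running cores.
shared = memory_interference_bound(10_000, 3, 50.0, 20.0, banks_partitioned=False)
private = memory_interference_bound(10_000, 3, 50.0, 20.0, banks_partitioned=True)
print(f"shared banks: {shared / 1e6:.2f} ms, partitioned banks: {private / 1e6:.2f} ms")
```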


Real-Time Technology and Applications Symposium | 2001

Resource sharing in reservation-based systems

Dionisio de Niz; Luca Abeni; Saowanee Saewong; Ragunathan Rajkumar

In recent years, real-time operating systems have begun to support the resource reservation paradigm. This technique has proved very effective in providing QoS to both real-time and legacy applications, ensuring that the temporal misbehavior of one application does not affect any other. However, resource sharing in a reservation system is still not well understood and can break the temporal isolation property due to priority inversion. We address this problem by presenting solution strategies that can be considered extensions of priority inheritance and priority ceiling protocol emulation for reservation systems.


International Conference on Distributed Computing Systems | 2010

Resource Allocation in Distributed Mixed-Criticality Cyber-Physical Systems

Karthik Lakshmanan; Dionisio de Niz; Ragunathan Rajkumar; Gabriel A. Moreno

Large-scale distributed cyber-physical systems will have many sensors/actuators (each with local micro-controllers) and a distributed communication/computing backbone with multiple processors. Many cyber-physical applications will be safety-critical, and in many cases unexpected workload spikes are likely to occur due to unpredictable changes in the physical environment. In the face of such overload scenarios, the desirable property is that the most critical applications continue to meet their deadlines. In this paper, we capture this mixed-criticality property by developing a formal overload-resilience metric called ductility. The generality of ductility enables it to evaluate any scheduling algorithm from the perspective of mixed-criticality cyber-physical systems. In distributed cyber-physical systems, ductility is the result of both the task-to-processor packing (a.k.a. bin packing) and the uniprocessor scheduling algorithms used. In this paper, we present a ductility-maximization packing algorithm to complement our previous work on mixed-criticality uniprocessor scheduling. Our packing algorithm, known as Compress-on-Overload Packing (COP), is a criticality-aware greedy bin-packing algorithm that maximizes the tolerance of high-criticality tasks to overloads. We compare the ductility of COP against the Worst-Fit Decreasing (WFD) bin-packing heuristic traditionally used for load balancing in distributed systems, and show that the performance of COP dominates WFD in the average case and can reach close to five times better ductility when resources are limited. Finally, we illustrate the practical use of COP in distributed cyber-physical systems using a radar surveillance application, and provide an overview of the entire process, from assigning task criticality levels to evaluating its performance.
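
As a rough illustration of a criticality-aware greedy packing in the spirit of COP (a guess at the general shape from the abstract, not the published algorithm), the sketch below places high-criticality tasks first, sized by their overload demand, and uses worst-fit so that overload headroom is spread across cores; low-criticality tasks then fill the remaining nominal capacity. All task names and numbers are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    name: str
    u_nominal: float     # utilization under nominal execution times
    u_overload: float    # utilization under overloaded execution times
    criticality: int     # higher value = more critical

def cop_like_pack(tasks: List[Task], capacity: float = 1.0) -> List[List[Task]]:
    cores: List[List[Task]] = []
    loads: List[float] = []
    # Most critical first; ties broken by larger overload demand.
    for t in sorted(tasks, key=lambda t: (-t.criticality, -t.u_overload)):
        # Assumption: high-criticality tasks are sized by overload demand,
        # low-criticality tasks by nominal demand.
        demand = t.u_overload if t.criticality > 1 else t.u_nominal
        # Worst-fit among feasible cores: keep the most headroom everywhere so
        # high-criticality overloads can "compress" co-located work.
        feasible = [i for i, load in enumerate(loads) if load + demand <= capacity]
        if feasible:
            i = min(feasible, key=lambda i: loads[i])
            cores[i].append(t)
            loads[i] += demand
        else:
            cores.append([t])
            loads.append(demand)
    return cores

tasks = [Task("radar_track", 0.3, 0.6, 2), Task("ui", 0.4, 0.4, 1),
         Task("fusion", 0.2, 0.5, 2), Task("logger", 0.3, 0.3, 1)]
for i, core in enumerate(cop_like_pack(tasks)):
    print("core", i, [t.name for t in core])
```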


International Journal of Embedded Systems | 2006

Partitioning bin-packing algorithms for distributed real-time systems

Dionisio de Niz; Raj Rajkumar

In this paper, we study extensions to bin-packing algorithms for packing software modules onto processors in real-time systems. We refer to this approach as Partitioning Bin-Packing. We analytically show that partitioning bin-packing techniques can reduce the number of bins required by traditional bin packing. We also evaluate heuristics to minimise both the number of processors (bins) needed and the network bandwidth required by communicating software modules that are partitioned across different processors. We find that a significant reduction in the number of bins is possible. Finally, different heuristics lead to different tradeoffs between processing and network needs.
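
A toy version of the splitting idea is shown below: ordinary first-fit decreasing, except that a module that fits nowhere may be cut so one piece fills an existing bin and the remainder opens a new one. The network-bandwidth cost of cutting a module, which the paper trades off explicitly, is not modeled here, and the module sizes are made up.

```python
from typing import List, Tuple

def partitioning_ffd(sizes: List[float], capacity: float = 1.0) -> List[List[Tuple[str, float]]]:
    """First-fit decreasing, with splitting of modules that fit in no bin."""
    bins: List[List[Tuple[str, float]]] = []
    free: List[float] = []
    for idx, size in sorted(enumerate(sizes), key=lambda p: -p[1]):
        name = f"m{idx}"
        placed = False
        for i in range(len(bins)):
            if size <= free[i]:
                bins[i].append((name, size))
                free[i] -= size
                placed = True
                break
        if not placed:
            # Split: pour as much as fits into the emptiest bin,
            # put the remainder into a new bin.
            if free and max(free) > 0:
                i = max(range(len(free)), key=lambda i: free[i])
                piece = free[i]
                bins[i].append((name + "a", piece))
                free[i] = 0.0
                size -= piece
                name += "b"
            bins.append([(name, size)])
            free.append(capacity - size)
    return bins

# Without splitting these three modules need 3 bins; with splitting, 2.
print(partitioning_ffd([0.7, 0.7, 0.6]))
```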


Real-Time Technology and Applications Symposium | 2006

Model-Based Development of Embedded Systems: The SysWeaver Approach

Dionisio de Niz; Gaurav Bhatia; Raj Rajkumar

Model-based development of embedded real-time systems is aimed at elevating the level of abstraction at which these systems are designed, analyzed, validated, coded, and tested. The use of a coherent multi-dimensional model across all development phases enables model-based design to generate systems that are correct by construction. Some commercial support is even available for code generation from higher-level models. However, such code generation capabilities are usually limited to uniprocessor targets and to a limited range of operating environments. SysWeaver (previously called "Time Weaver") is a model-based development tool that includes a flexible "syscode" generation scheme for distributed real-time systems that can be easily tailored to a wide range of target platforms. In this paper, we present our work on creating an interoperable toolchain to automatically generate complete runtime code from models. The toolchain includes a simulation tool (Matlab) and its code generator (Embedded Coder) along with SysWeaver. In this chain, the functional aspects of the system are specified in Simulink, Matlab's modeling language, and translated into a SysWeaver model to be enhanced with timing information, the target hardware model, and its communication dependencies. The final runtime code is then generated, automatically integrating the functional code generated with Embedded Coder and SysWeaver's syscode. This syscode includes OS-interfacing and network communication code with predictable timing behavior that can be verified at design time. Experiments with multi-node targets with end-to-end timing constraints in an automotive system show that many aspects of syscode and functional code generation can be automated. To our knowledge, this is the first time that multi-node executables including communication messages, functional behaviors, and para-functional properties have been automatically generated using a general platform-independent framework.


Real-Time Technology and Applications Symposium | 2006

Predictable Interrupt Management for Real Time Kernels over conventional PC Hardware

Luis Eduardo Leyva-del-Foyo; Pedro Mejía-Alvarez; Dionisio de Niz

In this paper we analyze the traditional model of interrupt management and its inability to provide the reliability and temporal predictability demanded by real-time systems. As a result of this analysis, we propose a model that integrates interrupt and task handling. We perform a schedulability analysis to evaluate and distinguish the circumstances under which this integrated model improves upon the traditional model. The design of a flexible and portable kernel interrupt subsystem for this integrated model is presented. In addition, we present the rationale for implementing our design over conventional PC interrupt hardware and an analysis of its overhead. Finally, experiments are conducted to demonstrate the deterministic behavior of our integrated model and to quantify its overhead.


Real-Time Technology and Applications Symposium | 2011

Mixed-Criticality Task Synchronization in Zero-Slack Scheduling

Karthik Lakshmanan; Dionisio de Niz; Ragunathan Rajkumar

Recent years have seen an increasing interest in the scheduling of mixed-criticality real-time systems. These systems are composed of groups of tasks with different levels of criticality deployed over the same processor(s). Such systems must be able to accommodate additional execution-time requirements that may occasionally be needed. When overload conditions develop, critical tasks must still meet their timing constraints at the expense of less critical tasks. Zero-slack scheduling algorithms are promising candidates for such systems. These algorithms guarantee that all tasks meet their deadlines when no overload occurs, and that criticality ordering is satisfied under overloads. Unfortunately, when mutually exclusive resources are shared across tasks, these guarantees are voided. Furthermore, the dual-execution modes of tasks in mixed-criticality systems violate the assumptions of traditional real-time synchronization protocols like PCP and hence the latter cannot be used directly. In this paper, we develop extensions to real-time synchronization protocols (Priority Inheritance and Priority Ceiling Protocol) that coordinate the mode changes of the zero-slack scheduler. We analyze the properties of these new protocols and the blocking terms they introduce. We maintain the deadlock avoidance property of our PCP extension, called the Priority and Criticality Ceiling Protocol (PCCP), and limit the blocking to only one critical section for each of the zero-slack scheduling execution modes. We also develop techniques to accommodate the blocking terms arising from synchronization, in calculating the zero-slack instants used by the scheduler. Finally, we conduct an experimental evaluation of PCCP. Our evaluation shows that PCCP is able to take advantage of the capacity of zero-slack schedulers to reclaim unused over-provisioning of resources that are only used in critical execution modes. This allows PCCP to accommodate larger blocking terms.
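
The ceiling bookkeeping suggested by the protocol's name can be sketched as follows. This shows only the kind of per-resource ceiling computation one would expect from the name PCCP, not the paper's actual locking and mode-change rules, and the task set is hypothetical.

```python
from typing import Dict, List, Tuple

def pccp_ceilings(users: Dict[str, List[str]],
                  priority: Dict[str, int],
                  criticality: Dict[str, int]) -> Dict[str, Tuple[int, int]]:
    """Map each shared resource to a (criticality ceiling, priority ceiling)
    pair, taken over the tasks that may lock it."""
    return {res: (max(criticality[t] for t in tasks),
                  max(priority[t] for t in tasks))
            for res, tasks in users.items()}

users = {"shared_buffer": ["ctrl", "log"], "actuator_lock": ["ctrl"]}
priority = {"ctrl": 10, "log": 3}
criticality = {"ctrl": 2, "log": 1}
print(pccp_ceilings(users, priority, criticality))
```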


Proceedings of the 10th International Workshop on Aspect-Oriented Modeling | 2007

Aspects in the industry standard AADL

Dionisio de Niz; Peter H. Feiler

Aspect-Oriented Modeling is aimed at reducing the complexity of models by separating their different concerns. In model-based development of embedded systems, this separation of concerns is even more important given the multiple non-functional concerns addressed by embedded systems. These concerns can include timeliness, fault tolerance, and security, to name a few. The Architecture Analysis and Design Language (AADL) is a standard architecture description language for designing and evaluating software architectures for embedded systems, already in use by a number of organizations around the world. In this paper we discuss our current effort to extend the language with new features for separation of concerns. These features include not only constructs to describe design choices but also routines to verify the proper combination of constructs from different concerns. This verification includes techniques and tools from the formal methods arena integrated into the AADL development tool, providing a seamless design flow. We believe that work in this direction is fundamental to tackling the potential combinatorial explosion of verifying the merging of multiple concerns into a final system.

Collaboration


Dive into Dionisio de Niz's collaborations.

Top Co-Authors

Björn Andersson (Carnegie Mellon University)
Sagar Chaki (Carnegie Mellon University)
Mark H. Klein (Carnegie Mellon University)
Lutz Wrage (Software Engineering Institute)
Hyoseung Kim (Carnegie Mellon University)
Raj Rajkumar (Carnegie Mellon University)
Peter H. Feiler (Carnegie Mellon University)
Gabriel A. Moreno (Carnegie Mellon University)
David Garlan (Carnegie Mellon University)