Featured Research

Operating Systems

Dependency Graph Approach for Multiprocessor Real-Time Synchronization

Over the years, many multiprocessor locking protocols have been designed and analyzed. However, the performance of these protocols depends heavily on how the tasks are partitioned and prioritized and on how the resources are shared locally and globally. This paper answers a few fundamental questions that arise when real-time tasks share resources in multiprocessor systems. We explore the fundamental difficulty of the multiprocessor synchronization problem and show that a very simplified version of this problem is NP-hard in the strong sense, regardless of the number of processors and the underlying scheduling paradigm. Therefore, allowing preemption or migration does not reduce the computational complexity. On the positive side, we develop a dependency-graph approach that is specifically useful for frame-based real-time tasks, in which all tasks have the same period and always release their jobs at the same time. We present a series of algorithms with speedup factors between 2 and 3 under semi-partitioned scheduling. We further explore methodologies and tradeoffs of preemptive versus non-preemptive scheduling algorithms and of partitioned versus semi-partitioned scheduling algorithms. The approach is extended to periodic tasks under certain conditions.
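
As a rough illustration of the idea (not the paper's actual speedup-factor-2-to-3 algorithms), the sketch below builds a simple dependency graph for frame-based jobs that share resources: jobs accessing the same resource are chained so their critical sections execute in a fixed order. The task data, resource names, and the shortest-critical-section-first ordering heuristic are all invented for this sketch.

```python
# Hypothetical dependency-graph construction for frame-based tasks: every
# task releases one job per frame, and each job has one critical section
# guarded by a single shared resource.

from collections import defaultdict

# (task id, resource it locks, critical-section length) -- made-up data
jobs = [
    ("T1", "R1", 2),
    ("T2", "R1", 4),
    ("T3", "R2", 1),
    ("T4", "R1", 3),
    ("T5", "R2", 5),
]

def build_dependency_graph(jobs):
    """Chain the critical sections of jobs that share a resource.

    Ordering jobs with shorter critical sections first is only a plausible
    heuristic for this sketch, not the paper's algorithm.
    """
    by_resource = defaultdict(list)
    for task, resource, cs_len in jobs:
        by_resource[resource].append((task, cs_len))

    edges = []
    for resource, accessors in by_resource.items():
        accessors.sort(key=lambda x: x[1])  # shortest critical section first
        for (pred, _), (succ, _) in zip(accessors, accessors[1:]):
            edges.append((pred, succ, resource))
    return edges

if __name__ == "__main__":
    for pred, succ, res in build_dependency_graph(jobs):
        print(f"{pred} -> {succ}  (serialized on {res})")
```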

Read more
Operating Systems

Design and Implementation of Modified Fuzzy based CPU Scheduling Algorithm

CPU scheduling is the basis of multiprogramming. Scheduling is the process that decides the order of execution among a set of tasks that are ready to execute. A number of CPU scheduling algorithms are available, but it is a difficult task to decide which one is better. This paper discusses the design and implementation of a modified fuzzy-based CPU scheduling algorithm and presents a new set of fuzzy rules. It demonstrates that scheduling with the new priority improves average waiting time and average turnaround time.
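
To make the fuzzy-priority idea concrete, here is a minimal sketch of how fuzzy rules might combine a task's static priority and burst time into a dynamic priority. The membership functions, rule base, input scales, and output levels below are invented for illustration and are not the paper's rule set.

```python
# Toy fuzzy priority calculation: combines a task's static priority and its
# burst time into a dynamic priority.  All membership functions and rules
# here are assumptions made for the sketch.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_priority(static_priority, burst_time):
    # Fuzzify inputs (0..10 priority scale, 0..20 ms burst scale -- assumed).
    prio_high = tri(static_priority, 5, 10, 15)
    prio_low = tri(static_priority, -5, 0, 5)
    burst_short = tri(burst_time, -10, 0, 10)
    burst_long = tri(burst_time, 10, 20, 30)

    # Hypothetical rules: high priority or a short burst raises the output;
    # low priority or a long burst lowers it.
    raise_strength = max(prio_high, burst_short)
    lower_strength = max(prio_low, burst_long)

    # Defuzzify with a weighted average of two output levels (9 and 2).
    total = raise_strength + lower_strength
    return 5.0 if total == 0 else (9 * raise_strength + 2 * lower_strength) / total

if __name__ == "__main__":
    for sp, bt in [(8, 3), (2, 18), (5, 10)]:
        print(f"static={sp}, burst={bt} -> dynamic priority {fuzzy_priority(sp, bt):.2f}")
```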

Read more
Operating Systems

Design and Performance Evaluation of a New Proposed Fittest Job First Dynamic Round Robin (FJFDRR) Scheduling Algorithm

In this paper, we propose a new variant of the Round Robin scheduling algorithm that executes processes according to a newly calculated Fit Factor f and uses the concept of a dynamic time quantum. We compare the performance of the proposed Fittest Job First Dynamic Round Robin (FJFDRR) algorithm with the Priority Based Static Round Robin (PBSRR) algorithm. Experimental results show that the proposed algorithm performs better than PBSRR in terms of reducing the number of context switches, the average waiting time, and the average turnaround time.
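
A simplified simulation in the spirit of the abstract: processes are ordered once by a fit factor and then served round robin with a dynamic quantum. The fit-factor formula (a weighted mix of priority and burst time), the use of the median remaining burst as the quantum, and the workload are assumptions for the sketch; the paper's actual definitions may differ.

```python
# Simplified FJFDRR-style round-robin simulation with made-up parameters.

import statistics

# (name, user priority [lower = more important], burst time) -- made-up workload
processes = [("P1", 2, 13), ("P2", 1, 5), ("P3", 3, 24), ("P4", 2, 8)]

def fit_factor(priority, burst, w1=0.5, w2=0.5):
    return w1 * priority + w2 * burst          # hypothetical formula

def fjfdrr(procs):
    queue = sorted(procs, key=lambda p: fit_factor(p[1], p[2]))  # fittest job first
    burst = {name: b for name, _, b in procs}
    remaining = dict(burst)
    time, finish, dispatches = 0, {}, 0

    while any(remaining.values()):
        # Dynamic quantum: median of the remaining bursts (an assumption).
        quantum = statistics.median(r for r in remaining.values() if r > 0)
        for name, _, _ in queue:
            if remaining[name] == 0:
                continue
            run = min(quantum, remaining[name])
            time += run
            remaining[name] -= run
            dispatches += 1
            if remaining[name] == 0:
                finish[name] = time

    # All processes arrive at t = 0, so waiting time = finish time - burst time.
    avg_wait = sum(finish[n] - burst[n] for n in burst) / len(burst)
    return avg_wait, dispatches

if __name__ == "__main__":
    avg_wait, dispatches = fjfdrr(processes)
    print(f"average waiting time: {avg_wait:.2f}, dispatches: {dispatches}")
```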

Read more
Operating Systems

Design and Performance Evaluation of an Optimized Disk Scheduling Algorithm (ODSA)

Disk scheduling is a very important aspect of an operating system. The performance of disk scheduling depends entirely on how efficiently the scheduling algorithm services requests. Many algorithms (FIFO, SSTF, SCAN, C-SCAN, LOOK, etc.) have been developed in recent years to optimize disk I/O performance. By reducing the average seek time and transfer time, we can improve the performance of disk I/O operations. Our proposed Optimized Disk Scheduling Algorithm (ODSA) requires less average seek time and transfer time than other disk scheduling algorithms (FIFO, SSTF, SCAN, C-SCAN, LOOK, etc.), which improves overall disk performance.
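
The comparison metric used in the abstract, average seek cost over a request queue, can be illustrated with a small calculator that evaluates two classic policies (FIFO and SSTF) on the same queue. ODSA itself is not reproduced here; the request queue and head position are the usual textbook example values.

```python
# Average seek distance for two classic disk-scheduling policies on the same
# request queue.  This only illustrates how policies are compared by average
# seek cost; it does not implement ODSA.

def fifo_order(head, requests):
    return list(requests)                      # serve in arrival order

def sstf_order(head, requests):
    pending, order, pos = list(requests), [], head
    while pending:
        nxt = min(pending, key=lambda c: abs(c - pos))  # shortest seek first
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

def average_seek(head, order):
    total, pos = 0, head
    for cylinder in order:
        total += abs(cylinder - pos)
        pos = cylinder
    return total / len(order)

if __name__ == "__main__":
    head, queue = 53, [98, 183, 37, 122, 14, 124, 65, 67]
    for name, policy in [("FIFO", fifo_order), ("SSTF", sstf_order)]:
        order = policy(head, queue)
        print(f"{name}: average seek = {average_seek(head, order):.1f} cylinders")
```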

Read more
Operating Systems

Determinating Timing Channels in Compute Clouds

Timing side-channels represent an insidious security challenge for cloud computing, because: (a) massive parallelism in the cloud makes timing channels pervasive and hard to control; (b) timing channels enable one customer to steal information from another without leaving a trail or raising alarms; (c) only the cloud provider can feasibly detect and report such attacks, but the provider's incentives are not to; and (d) resource partitioning schemes for timing channel control undermine statistical sharing efficiency, and, with it, the cloud computing business model. We propose a new approach to timing channel control, using provider-enforced deterministic execution instead of resource partitioning to eliminate timing channels within a shared cloud domain. Provider-enforced determinism prevents execution timing from affecting the results of a compute task, however large or parallel, ensuring that a task's outputs leak no timing information apart from explicit timing inputs and total compute duration. Experiments with a prototype OS for deterministic cloud computing suggest that such an approach may be practical and efficient. The OS supports deterministic versions of familiar APIs such as processes, threads, shared memory, and file systems, and runs coarse-grained parallel tasks as efficiently and scalably as current timing channel-ridden systems.
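
The following tiny sketch only illustrates the invariant that provider-enforced determinism provides, namely that execution timing cannot affect a task's results; it is not the paper's OS mechanism. The workload and jitter are fabricated for the example.

```python
# The result of the parallel task below is combined in submission order, so
# random timing jitter in the workers cannot change the output.

from concurrent.futures import ThreadPoolExecutor
import random, time

def work(chunk):
    time.sleep(random.random() / 100)     # simulate timing jitter
    return sum(chunk)

def deterministic_partial_sums(chunks):
    # pool.map returns results in input order regardless of which worker
    # finishes first, so the output is identical on every run.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(work, chunks))

if __name__ == "__main__":
    chunks = [list(range(i, i + 10)) for i in range(0, 40, 10)]
    print(deterministic_partial_sums(chunks))   # same output on every run
```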

Read more
Operating Systems

Deterministic Consistency: A Programming Model for Shared Memory Parallelism

The difficulty of developing reliable parallel software is generating interest in deterministic environments, where a given program and input can yield only one possible result. Languages or type systems can enforce determinism in new code, and runtime systems can impose synthetic schedules on legacy parallel code. To parallelize existing serial code, however, we would like a programming model that is naturally deterministic without language restrictions or artificial scheduling. We propose "deterministic consistency" (DC), a parallel programming model as easy to understand as the "parallel assignment" construct in sequential languages such as Perl and JavaScript, where concurrent threads always read their inputs before writing shared outputs. DC supports common data- and task-parallel synchronization abstractions such as fork/join and barriers, as well as non-hierarchical structures such as producer/consumer pipelines and futures. A preliminary prototype suggests that software-only implementations of DC can run applications written for popular parallel environments such as OpenMP with low (<10%) overhead for some applications.
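
A conceptual sketch of the "read your inputs before writing shared outputs" rule in a fork/join style: each child works on a snapshot of the pre-fork shared state, and its writes are merged only at the join point, in a fixed order. This illustrates the programming model informally and is not the DC runtime.

```python
# Fork/join where children see a snapshot of shared state and writes are
# merged deterministically at the join.

import copy
from concurrent.futures import ThreadPoolExecutor

def fork_join(shared, tasks):
    snapshot = copy.deepcopy(shared)              # children read pre-fork state
    with ThreadPoolExecutor() as pool:
        updates = list(pool.map(lambda t: t(snapshot), tasks))
    for update in updates:                        # deterministic merge order
        shared.update(update)
    return shared

if __name__ == "__main__":
    state = {"x": 1, "y": 2}
    tasks = [
        lambda s: {"x": s["x"] + s["y"]},   # reads pre-fork x and y
        lambda s: {"y": s["x"] * 10},       # cannot observe the other write
    ]
    print(fork_join(state, tasks))          # {'x': 3, 'y': 10} on every run
```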

Read more
Operating Systems

Deterministic Real-time Thread Scheduling

A race condition is a timing-sensitive problem. A significant source of timing variation comes from nondeterministic hardware interactions such as cache misses. While data race detectors and model checkers can check for races, the enormous state space of complex software makes it difficult to identify all of them, and the residual implementation errors remain a big challenge. In this paper, we propose deterministic real-time scheduling methods to address scheduling nondeterminism in uniprocessor systems. The main idea is to use timing-insensitive deterministic events, e.g., an instruction counter, in conjunction with a real-time clock to schedule threads. By introducing the concept of Worst Case Executable Instructions (WCEI), we guarantee both determinism and real-time performance.
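
A conceptual simulation of what scheduling on an instruction counter instead of a wall-clock timer buys: each thread is preempted after a fixed instruction budget, so the interleaving is reproducible run after run. The fixed budget stands in loosely for a WCEI-style bound; the thread model and budget are assumptions for the sketch.

```python
# Deterministic round robin driven by an instruction count rather than time.

def deterministic_round_robin(threads, budget=3):
    """threads: dict name -> list of 'instructions' (here just strings)."""
    counters = {name: 0 for name in threads}
    trace = []
    while any(counters[n] < len(ins) for n, ins in threads.items()):
        for name, instructions in threads.items():
            executed = 0
            while counters[name] < len(instructions) and executed < budget:
                trace.append((name, instructions[counters[name]]))
                counters[name] += 1
                executed += 1      # preemption point is instruction-count based
    return trace

if __name__ == "__main__":
    threads = {
        "A": [f"a{i}" for i in range(5)],
        "B": [f"b{i}" for i in range(4)],
    }
    for step in deterministic_round_robin(threads):
        print(step)                # identical interleaving on every run
```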

Read more
Operating Systems

Deterministically Deterring Timing Attacks in Deterland

The massive parallelism and resource sharing embodying today's cloud business model not only exacerbate the security challenge of timing channels, but also undermine the viability of defenses based on resource partitioning. We propose hypervisor-enforced timing mitigation to control timing channels in cloud environments. This approach closes "reference clocks" internal to the cloud by imposing a deterministic view of time on guest code, and uses timing mitigators to pace I/O and rate-limit potential information leakage to external observers. Our prototype hypervisor is the first system to mitigate timing-channel leakage across full-scale existing operating systems such as Linux and applications in arbitrary languages. Mitigation incurs a varying performance cost, depending on workload and tunable leakage-limiting parameters, but this cost may be justified for security-critical cloud applications and data.
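
To give a feel for what a timing mitigator does, the sketch below releases externally visible output only on fixed-interval boundaries, so an observer learns at most the quantized release times rather than fine-grained internal timing. The interval, queue-based API, and message format are invented for the illustration; this is not the Deterland hypervisor.

```python
# Toy timing mitigator: output is drained and released once per interval,
# regardless of when it was produced internally.

import queue, threading, time

def mitigated_output(out_queue, interval=0.1):
    """Release all pending output at each interval boundary."""
    while True:
        time.sleep(interval)                  # next release boundary
        batch = []
        try:
            while True:
                batch.append(out_queue.get_nowait())
        except queue.Empty:
            pass
        if batch:
            print(f"[release @ {time.monotonic():.1f}] {batch}")

if __name__ == "__main__":
    q = queue.Queue()
    threading.Thread(target=mitigated_output, args=(q,), daemon=True).start()
    for i in range(5):
        time.sleep(0.03 * (i + 1))            # internal timing the observer never sees
        q.put(f"msg{i}")
    time.sleep(0.5)
```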

Read more
Operating Systems

Dim Silicon and the Case for Improved DVFS Policies

Due to thermal and power supply limits, modern Intel CPUs reduce their frequency when AVX2 and AVX-512 instructions are executed. As the CPUs wait for 670 µs before increasing the frequency again, the performance of some heterogeneous workloads is reduced. In this paper, we describe parallels between this situation and dynamic power management as well as between the policy implemented by these CPUs and fixed-timeout device shutdown policies. We show that the policy implemented by Intel CPUs is not optimal and describe potential better policies. In particular, we present a mechanism to classify applications based on their likeliness to cause frequency reduction. Our approach takes either the resulting classification information or information provided by the application and generates hints for the DVFS policy. We show that faster frequency changes based on these hints are able to improve performance for a web server using the OpenSSL library.
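
A toy classifier in the spirit of the described mechanism: estimate how likely a workload is to trigger AVX-related frequency reduction from profile samples and emit a hint for the DVFS policy. The sample counts, threshold, and hint strings are all invented for illustration and are not the paper's interface.

```python
# Classify workloads by the fraction of profile samples executing wide-vector
# code and turn the label into a DVFS hint.  Data and thresholds are made up.

def classify(avx_samples, total_samples, threshold=0.2):
    ratio = avx_samples / total_samples
    return ("avx-heavy" if ratio >= threshold else "avx-light"), ratio

def dvfs_hint(label):
    # An avx-light workload should not stay at the reduced frequency for the
    # full fixed timeout after an occasional AVX instruction.
    return "restore-frequency-early" if label == "avx-light" else "keep-reduced-frequency"

if __name__ == "__main__":
    workloads = {
        "webserver+OpenSSL": (30, 1000),       # hypothetical profile counts
        "dense-linear-algebra": (700, 1000),
    }
    for app, (avx, total) in workloads.items():
        label, ratio = classify(avx, total)
        print(f"{app}: {label} ({ratio:.0%} AVX samples) -> hint: {dvfs_hint(label)}")
```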

Read more
Operating Systems

Disaggregated Accelerator Management System for Cloud Data Centers

A conventional data center consisting of monolithic servers is confronted with limitations including lack of operational flexibility, low resource utilization, and low maintainability. Resource disaggregation is a promising solution to these issues. We propose a disaggregated cloud data center architecture called Flow-in-Cloud (FiC) that enables an existing cluster computer system to expand an accelerator pool through a high-speed network. FlowOS-RM manages the entire pool of resources and deploys a user job on a dynamically constructed slice according to the user's request. This slice consists of compute nodes and accelerators, where each accelerator is attached to a corresponding compute node. This paper demonstrates the feasibility of FiC in a proof-of-concept experiment running a distributed deep learning application on the prototype system. The results demonstrate the applicability of the proposed system.
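
A minimal sketch of slice construction in such a disaggregated setting: a job requests N compute nodes with one accelerator each, and a manager pairs free nodes with free accelerators from the pool. The resource names and the first-fit allocation policy are hypothetical; FlowOS-RM's actual interface may differ.

```python
# Toy slice construction: pair free compute nodes with free accelerators.

free_nodes = ["node0", "node1", "node2", "node3"]
free_accelerators = ["fpga0", "fpga1", "gpu0", "gpu1"]

def build_slice(num_nodes):
    if num_nodes > min(len(free_nodes), len(free_accelerators)):
        raise RuntimeError("not enough free resources for the requested slice")
    slice_ = []
    for _ in range(num_nodes):
        node = free_nodes.pop(0)
        acc = free_accelerators.pop(0)      # attach one accelerator per node
        slice_.append((node, acc))
    return slice_

if __name__ == "__main__":
    job_slice = build_slice(2)
    print("slice:", job_slice)              # e.g. [('node0', 'fpga0'), ('node1', 'fpga1')]
    # after the job finishes, the resources would be returned to the pool
```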

Read more
