Featured Research

Operating Systems

An Operating System Level Data Migration Scheme in Hybrid DRAM-NVM Memory Architecture

With the emergence of Non-Volatile Memories (NVMs) and their shortcomings, such as limited endurance and high power consumption on write requests, several studies have suggested hybrid memory architectures employing both Dynamic Random Access Memory (DRAM) and NVM in a memory system. By conducting comprehensive experiments, we have observed that such studies fail to consider very important aspects of hybrid memories, including the effect of: a) data migrations on performance, b) data migrations on power, and c) the granularity of data migration. This paper presents an efficient data migration scheme at the Operating System level in a hybrid DRAM-NVM memory architecture. In the proposed scheme, two Least Recently Used (LRU) queues, one for the DRAM section and one for the NVM section, are used to guide data migration. With careful characterization of the workloads obtained from the PARSEC benchmark suite, the proposed scheme prevents unnecessary migrations and only allows migrations that benefit the system in terms of power and performance. The experimental results show that the proposed scheme reduces power consumption by up to 79% compared to a DRAM-only memory and by up to 48% compared to state-of-the-art techniques.
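
As a rough illustration of the two-queue policy described above, the Python sketch below keeps one LRU queue per memory section and promotes an NVM page to DRAM only after repeated accesses suggest the migration will pay off. The capacities, the HOT_THRESHOLD promotion rule, and the place-new-pages-in-NVM policy are illustrative assumptions, not the paper's tuned parameters.

    # Minimal sketch of OS-level DRAM/NVM migration with two LRU queues.
    # Thresholds and placement policy are assumptions for illustration.
    from collections import OrderedDict

    DRAM_CAPACITY = 4      # pages held in DRAM (tiny, for demo)
    NVM_CAPACITY = 16      # pages held in NVM
    HOT_THRESHOLD = 3      # NVM hits before promotion (assumed policy)

    dram = OrderedDict()   # LRU queue for the DRAM section
    nvm = OrderedDict()    # LRU queue for the NVM section
    nvm_hits = {}          # per-page access counts while resident in NVM

    def access(page):
        """Touch a page; migrate only when the NVM page looks hot."""
        if page in dram:
            dram.move_to_end(page)            # refresh LRU position
            return "dram hit"
        if page in nvm:
            nvm.move_to_end(page)
            nvm_hits[page] = nvm_hits.get(page, 0) + 1
            if nvm_hits[page] >= HOT_THRESHOLD:
                promote(page)                 # migration judged beneficial
                return "migrated to dram"
            return "nvm hit"
        # Page fault: new pages start in NVM (assumed placement).
        nvm[page] = True
        nvm_hits[page] = 0
        if len(nvm) > NVM_CAPACITY:
            evicted, _ = nvm.popitem(last=False)  # written back, not modeled
            nvm_hits.pop(evicted, None)
        return "fault"

    def promote(page):
        """Move a hot page from NVM to DRAM, demoting DRAM's LRU victim."""
        nvm.pop(page)
        nvm_hits.pop(page, None)
        if len(dram) >= DRAM_CAPACITY:
            victim, _ = dram.popitem(last=False)  # demote coldest DRAM page
            nvm[victim] = True                    # NVM overflow elided
            nvm_hits[victim] = 0
        dram[page] = True

The threshold is what prevents unnecessary migrations: a page touched once in NVM stays there, so write-heavy cold data never churns between sections.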

Read more
Operating Systems

An Optimal Real-Time Scheduling Approach: From Multiprocessor to Uniprocessor

An optimal solution to the problem of scheduling real-time tasks on a set of identical processors is derived. The described approach is based on solving an equivalent uniprocessor real-time scheduling problem. Although other scheduling algorithms achieve optimality, they usually impose prohibitive preemption costs. Unlike these algorithms, the proposed approach is observed through simulation to produce no more than three preemption points per job.
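
The abstract does not spell out the multiprocessor-to-uniprocessor transformation itself, but the uniprocessor side of such a reduction can be illustrated by the classic feasibility test for preemptive EDF with implicit-deadline periodic tasks, sketched below; this is background material, not the paper's algorithm.

    # Liu & Layland feasibility test for preemptive EDF on one processor
    # with implicit-deadline periodic tasks: schedulable iff total
    # utilization <= 1. Illustrates only the uniprocessor subproblem.
    def edf_feasible(tasks):
        """tasks: iterable of (wcet, period) pairs."""
        utilization = sum(c / t for c, t in tasks)
        return utilization <= 1.0

    print(edf_feasible([(1, 4), (2, 6), (1, 8)]))  # U ~= 0.708 -> True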

Read more
Operating Systems

An Optimized Disk Scheduling Algorithm With Bad-Sector Management

In high-performance computing, researchers try to optimize CPU scheduling algorithms to make computers work faster and more efficiently. But a process needs both CPU bursts and I/O bursts to complete its execution. As computers have modernized, the speeds of processors, hard disks, and I/O devices have increased steadily, yet the data access speed of a hard disk remains much lower than the speed of the processor. When the processor receives data from secondary memory it executes immediately, and then it has to wait to receive the next data; the slowness of the hard disk thus becomes a bottleneck for processor performance. Researchers therefore try to develop and optimize the traditional disk scheduling algorithms for faster data transfer to and from secondary storage devices. In this paper we develop an optimized scheduling algorithm that reduces the seek time, the rotational latency, and the data transfer time at runtime. The algorithm also manages the bad sectors of the hard disk, and it attempts to reduce power consumption and heat by minimizing the time spent reading bad sectors.
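
The abstract does not give the algorithm's exact rules, so the sketch below illustrates the general idea with a SCAN-style sweep that consults a bad-sector table before servicing each request; the remapping of bad sectors to spares via spare_map is an illustrative assumption.

    # Hedged sketch: SCAN-style disk scheduling with bad-sector handling.
    def schedule(head, requests, bad_sectors, spare_map):
        """Service requests in one outward sweep, then inward, skipping
        or remapping known-bad sectors; returns (order, total seek)."""
        order, seek = [], 0
        upper = sorted(r for r in requests if r >= head)
        lower = sorted((r for r in requests if r < head), reverse=True)
        for sector in upper + lower:
            if sector in bad_sectors:
                sector = spare_map.get(sector)   # remap if a spare exists
                if sector is None:
                    continue                     # unreadable: skip entirely
            seek += abs(sector - head)
            head = sector
            order.append(sector)
        return order, seek

    order, seek = schedule(50, [10, 60, 95, 33], {33}, {33: 70})
    print(order, seek)   # [60, 95, 70, 10], 130; bad sector 33 remapped

Skipping bad sectors before seeking, rather than timing out on them, is what saves both time and the power spent on futile retries.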

Read more
Operating Systems

An Optimum Multilevel Dynamic Round Robin Scheduling Algorithm

The main objective of this paper is to improve the Round Robin scheduling algorithm using the dynamic time slice concept. CPU scheduling is central to accomplishing operating system (OS) design goals: the scheduler should keep as many processes running as possible at all times in order to make the best use of the CPU, and it has a strong effect on resource utilization as well as the overall performance of the system. The Round Robin algorithm performs optimally in time-shared systems, but it is not suitable for soft real-time systems because it incurs a larger number of context switches, longer waiting times, and longer response times. In this paper, a new CPU scheduling algorithm called An Optimum Multilevel Dynamic Round Robin Scheduling Algorithm is proposed, which calculates an intelligent time slice that changes after every round of execution. The suggested algorithm was evaluated against standard CPU scheduling objectives, and it was observed to perform well compared to other existing CPU scheduling algorithms.
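
The paper's "intelligent time slice" formula is not given in the abstract; the sketch below shows the general mechanism of recomputing the quantum after every round, using the median of the remaining bursts as an illustrative stand-in rule.

    # Hedged sketch of round robin with a per-round dynamic time slice.
    # The median rule is an assumption, not the paper's formula.
    import statistics

    def dynamic_rr(bursts):
        """bursts: dict pid -> CPU burst. Returns the number of CPU
        dispatches (a proxy for context switches)."""
        remaining = dict(bursts)
        dispatches = 0
        while remaining:
            quantum = statistics.median(remaining.values())  # per round
            for pid in list(remaining):
                run = min(quantum, remaining[pid])
                remaining[pid] -= run
                dispatches += 1
                if remaining[pid] <= 0:
                    del remaining[pid]
        return dispatches

    print(dynamic_rr({"A": 24, "B": 3, "C": 3}))  # 4, vs 8 for static q=4

Because the slice adapts each round, short jobs finish in their first turn and long jobs stop paying for a quantum sized against processes that have already left.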

Read more
Operating Systems

An Optimized Round Robin CPU Scheduling Algorithm With Dynamic Time Quantum

CPU scheduling is one of the most crucial operations performed by an operating system. Many algorithms are available for CPU scheduling, and among them RR (Round Robin) is considered optimal in time-shared environments. The effectiveness of Round Robin depends entirely on the choice of time quantum. In this paper a new CPU scheduling algorithm, named DABRR (Dynamic Average Burst Round Robin), is proposed; it uses a dynamic time quantum instead of the static time quantum used in RR. The performance of the proposed algorithm is experimentally compared with traditional RR and some existing variants of RR. The results presented in this paper demonstrate improved performance in terms of average waiting time, average turnaround time, and context switching.
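
The sketch below infers the quantum rule from the algorithm's name, using the average of the remaining bursts recomputed each round; the paper may define the averaging differently, and all processes are assumed to arrive at time zero.

    # Hedged sketch of a DABRR-style scheduler reporting the two metrics
    # the abstract compares: average waiting and turnaround time.
    def dabrr(bursts):
        """bursts: dict pid -> CPU burst. Returns (avg_wait, avg_tat)."""
        remaining = dict(bursts)
        finish, clock = {}, 0
        while remaining:
            quantum = sum(remaining.values()) / len(remaining)  # dynamic
            for pid in list(remaining):
                run = min(quantum, remaining[pid])
                clock += run
                remaining[pid] -= run
                if remaining[pid] <= 0:
                    finish[pid] = clock
                    del remaining[pid]
        turnaround = {p: finish[p] for p in bursts}             # arrival 0
        waiting = {p: turnaround[p] - bursts[p] for p in bursts}
        n = len(bursts)
        return sum(waiting.values()) / n, sum(turnaround.values()) / n

    print(dabrr({"P1": 20, "P2": 40, "P3": 60}))  # (26.67, 66.67)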

Read more
Operating Systems

Analyzing IO Amplification in Linux File Systems

We present the first systematic analysis of read, write, and space amplification in Linux file systems. While many researchers are tackling write amplification in key-value stores, IO amplification in file systems has been largely unexplored. We analyze data and metadata operations on five widely used Linux file systems: ext2, ext4, XFS, btrfs, and F2FS. We find that data operations result in significant write amplification (2-32X) and that metadata operations have a large IO cost; for example, a single rename requires 648 KB of write IO in btrfs. We also find that small random reads result in read amplification of 2-13X. Based on these observations, we present the CReWS conjecture about the relationship between IO amplification, consistency, and storage space utilization. We hope this paper spurs the design of future file systems with less IO amplification, especially for non-volatile memory technologies.
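
Write amplification in this sense is device-level bytes written divided by application-level bytes written, and on Linux it can be estimated from /proc/diskstats. A minimal sketch follows; the device name "sda" and the 512-byte sector unit are assumptions to adjust for your system, and the test file must live on the measured device.

    # Minimal Linux sketch: measure write amplification for one write.
    import os

    def sectors_written(device="sda"):
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] == device:
                    return int(fields[9])    # field 10: sectors written
        raise ValueError(f"device {device!r} not found")

    def write_amplification(path, payload, device="sda"):
        before = sectors_written(device)
        fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
        os.write(fd, payload)
        os.fsync(fd)                         # force data to the device
        os.close(fd)
        os.sync()                            # flush journal/metadata too
        after = sectors_written(device)
        return (after - before) * 512 / len(payload)

    # path must be on the file system backed by the measured device
    print(write_amplification("/tmp/wa_test", b"x" * 4096))

Concurrent IO on the device inflates the measurement, so numbers like those in the paper require an otherwise idle disk.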

Read more
Operating Systems

AppStreamer: Reducing Storage Requirements of Mobile Games through Predictive Streaming

Storage has become a constrained resource on smartphones. Gaming is a popular activity on mobile devices, and the explosive growth in the number of games, coupled with their growing size, contributes to the storage crunch. Even where storage is plentiful, it takes a long time to download and install a heavy app before it can be launched. This paper presents AppStreamer, a novel technique for reducing the storage requirements or startup delay of mobile games, and of heavy mobile apps in general. AppStreamer is based on the intuition that most apps do not need the entirety of their files (images, audio and video clips, etc.) at any one time. AppStreamer can therefore keep only a small part of the files on the device, akin to a "cache", and download the remainder from a cloud storage server or a nearby edge server when it predicts that the app will need them in the near future. AppStreamer continuously predicts file blocks for the near future as the user uses the app, and fetches them from the storage server before the user sees a stall due to missing resources. We implement AppStreamer at the Android file system layer. This ensures that apps require no source code modification, and the approach generalizes across apps. We evaluate AppStreamer using two popular games: Dead Effect 2, a 3D first-person shooter, and Fire Emblem Heroes, a 2D turn-based strategy role-playing game. In a user study, 75% and 87% of the users, respectively, find that AppStreamer provides the same quality of user experience as the baseline where all files are stored on the device. AppStreamer cuts the storage requirement by 87% for Dead Effect 2 and 86% for Fire Emblem Heroes.
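
The core loop, predict then prefetch, can be sketched as below. The first-order Markov predictor and the fetch_from_server callback are illustrative assumptions; the paper's actual predictor operating at the file system layer may be considerably more sophisticated.

    # Hedged sketch of predictive block streaming in the AppStreamer style.
    from collections import defaultdict

    class Prefetcher:
        def __init__(self, lookahead=2):
            self.next_blocks = defaultdict(lambda: defaultdict(int))
            self.lookahead = lookahead

        def train(self, trace):
            """trace: sequence of block ids from a prior run of the app."""
            for a, b in zip(trace, trace[1:]):
                self.next_blocks[a][b] += 1

        def on_access(self, block, cache, fetch_from_server):
            """Serve one access, then prefetch likely successors."""
            if block not in cache:
                fetch_from_server(block)     # stall: prediction missed
                cache.add(block)
            cur = block
            for _ in range(self.lookahead):  # walk the likeliest chain
                succ = self.next_blocks.get(cur)
                if not succ:
                    break
                cur = max(succ, key=succ.get)
                if cur not in cache:
                    fetch_from_server(cur)   # fetched ahead of use
                    cache.add(cur)

Training on a block trace from one play session and replaying another session's accesses through on_access gives a rough estimate of how many stalls such a predictor avoids.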

Read more
Operating Systems

Application of Global and One-Dimensional Local Optimization to Operating System Scheduler Tuning

This paper describes a comparative study of global and one-dimensional local optimization methods applied to operating system scheduler tuning. The operating system scheduler we use is the Linux 2.6.23 Completely Fair Scheduler (CFS) running in a simulator (LinSched). We have ported the Hackbench scheduler benchmark to this simulator and use it as the workload. The global optimization approach we use is Particle Swarm Optimization (PSO), and we make use of Response Surface Methodology (RSM) to specify optimal parameters for our PSO implementation. The one-dimensional local optimization approach we use is the Golden Section method. In order to use this approach, we convert the scheduler tuning problem from one involving the setting of three parameters to one involving the manipulation of a single parameter. Our results show that the global optimization approach yields a better response, but the one-dimensional optimization approach converges to a solution faster.
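
Golden Section search itself is standard and easy to sketch: it minimizes a unimodal function of one variable by shrinking a bracket by the inverse golden ratio each step, reusing one evaluation per iteration. In the study the cost function would be a benchmark run (e.g., Hackbench time under LinSched as a function of the single scheduler parameter); the quadratic below is only a stand-in.

    # Golden-section search over a single tuning parameter.
    import math

    INVPHI = (math.sqrt(5) - 1) / 2          # 1/phi ~= 0.618

    def golden_section_min(cost, lo, hi, tol=1e-3):
        """Minimize a unimodal cost function on [lo, hi]."""
        a, b = lo, hi
        c = b - (b - a) * INVPHI
        d = a + (b - a) * INVPHI
        fc, fd = cost(c), cost(d)
        while abs(b - a) > tol:
            if fc < fd:                      # minimum lies in [a, d]
                b, d, fd = d, c, fc
                c = b - (b - a) * INVPHI
                fc = cost(c)
            else:                            # minimum lies in [c, b]
                a, c, fc = c, d, fd
                d = a + (b - a) * INVPHI
                fd = cost(d)
        return (a + b) / 2

    # Stand-in cost; a real run would invoke the simulator workload.
    print(golden_section_min(lambda x: (x - 3.2) ** 2, 0, 10))  # ~3.2

Each iteration needs only one new benchmark run, which is why the local method converges with far fewer evaluations than a population-based method like PSO.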

Read more
Operating Systems

Assessment of Response Time for New Multi Level Feedback Queue Scheduler

Response time, one of the defining characteristics of a scheduler, is a prominent attribute of any CPU scheduling algorithm. The proposed New Multi Level Feedback Queue (NMLFQ) Scheduler is compared with the dynamic, real-time Dependent Activity Scheduling Algorithm (DASA) and Locke's Best Effort Scheduling Algorithm (LBESA). We summarize the favorable results of the NMLFQ scheduler in comparison with these dynamic best-effort schedulers with respect to response time.
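
NMLFQ's specific enhancements are not described in the abstract, so the sketch below models only a generic multilevel feedback queue baseline and shows how response time (first dispatch) is measured in such a simulation; the doubling quanta and demotion rule are the textbook defaults, assumed here.

    # Hedged sketch: response-time measurement in a generic MLFQ.
    from collections import deque

    def mlfq_response_times(bursts, quanta=(2, 4, 8)):
        """bursts: dict pid -> burst; all jobs arrive at t = 0.
        Returns pid -> response time (first time on the CPU)."""
        levels = [deque() for _ in quanta]
        for pid in bursts:
            levels[0].append([pid, bursts[pid]])
        clock, first_run = 0, {}
        while any(levels):
            lvl = next(i for i, q in enumerate(levels) if q)
            pid, rem = levels[lvl].popleft()
            first_run.setdefault(pid, clock)  # first dispatch time
            run = min(quanta[lvl], rem)
            clock += run
            rem -= run
            if rem > 0:                       # demote, or stay at bottom
                levels[min(lvl + 1, len(levels) - 1)].append([pid, rem])
        return first_run

    print(mlfq_response_times({"A": 10, "B": 1, "C": 5}))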

Read more
Operating Systems

Augmenting Operating Systems With the GPU

The most popular heterogeneous many-core platform, the CPU+GPU combination, has received relatively little attention in operating systems research. This platform is already widely deployed: GPUs can be found, in some form, in most desktop and laptop PCs. Used for more than just graphics processing, modern GPUs have proved themselves versatile enough to be adapted to other applications as well. Though GPUs have strengths that can be exploited in systems software, they remain a largely untapped resource. We argue that augmenting the OS kernel with GPU computing power opens the door to a number of new opportunities. GPUs can be used to speed up some kernel functions, make others scale better, and make it feasible to bring some computation-heavy functionality into the kernel. We present our framework for using the GPU as a co-processor from an OS kernel, and demonstrate a prototype in Linux.

Read more