Sarah Tasneem
Eastern Connecticut State University
Publication
Featured research published by Sarah Tasneem.
international conference on electrical and control engineering | 2010
Sarah Tasneem; Feng Zhang; Lester Lipsky; Steve Thompson
Scheduling has a great influence on computer system performance. Many recent system designs use policies that give priority to short jobs, since the majority of jobs are often short. SRPT (Shortest Remaining Processing Time first), which serves the job that needs the least amount of service to complete, is known to produce the optimal mean response time. However, SRPT requires exact knowledge of each job's service time in advance, which is impractical. When the CPU time requirement of each job is not available beforehand, FCFS can be used, or PS, which shares the system capacity equally among all jobs; FCFS, however, performs poorly even under moderate variability in job service times. In practice, PS must be implemented as RR with time-slicing, which incurs non-negligible job-switching overhead when time-slices are small. Over time, other preemptive strategies have also been proposed, for example LCFSPR (Last Come First Served with Preemptive Resume), LAT (Least Attained Time), and SRT (Shortest Residual Time); their common feature is that preemption is employed to favour jobs that are likely to be short. These strategies differ in overhead: RR and LCFSPR only require inserting jobs at the front or back of the queue (complexity O(1)), whereas LAT and SRT must maintain a sorted queue (complexity O(log n)) and keep track of the attained or residual times of individual jobs. LCFSPR also yields the M/M/1 result, but it leads to situations where short jobs can get stuck behind long ones. In this paper we mainly consider how newly arrived jobs should be handled when implementing the RR strategy, and in particular how time-slicing performs if large time-slices have to be used. We investigate several RR variants through discrete event simulation. Our results show that, by favouring newly arrived jobs, the performance of RR with large time-slices can be improved beyond that of ideal PS. The simple immediate-preemption scheme, which serves a new job immediately by preempting the currently active job, is shown to further improve the performance of RR.
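The immediate-preemption variant is simple to sketch. The following discrete event simulation is a minimal illustration, not the paper's simulator: the arrival rate, job-size distribution, time-slice length, and the choice to return preempted jobs to the back of the queue are all assumptions.

```python
from collections import deque
import random

def simulate(n_jobs=200_000, lam=0.8, mu=1.0, q=2.0, seed=42):
    """RR with time-slice q in which every new arrival immediately
    preempts the job in service (assumed parameter values)."""
    rng = random.Random(seed)
    clock, done, total_resp = 0.0, 0, 0.0
    to_arrive = n_jobs
    next_arr = rng.expovariate(lam)
    ready = deque()                   # entries: (remaining_service, arrival_time)
    current = None
    while done < n_jobs:
        if current is None and ready:
            current = ready.popleft() # start the job at the head of the queue
        if current is None:
            clock = next_arr          # server idle: jump to the next arrival
            end = float('inf')
        else:
            end = clock + min(q, current[0])   # slice expiry or completion
        if to_arrive > 0 and next_arr <= end:
            if current is not None:   # immediate preemption of the running job
                ready.append((current[0] - (next_arr - clock), current[1]))
            clock = next_arr
            current = (rng.expovariate(mu), clock)  # new job takes the server
            to_arrive -= 1
            next_arr = (clock + rng.expovariate(lam)) if to_arrive else float('inf')
        else:
            remaining = current[0] - min(q, current[0])
            clock = end
            if remaining <= 1e-12:    # job finished within its slice
                total_resp += clock - current[1]
                done += 1
            else:                     # slice expired: back of the queue
                ready.append((remaining, current[1]))
            current = None
    return total_resp / n_jobs

print("mean response time:", simulate())
```

Where preempted and newly arrived jobs re-enter the queue (front versus back) is exactly the kind of design choice the RR variants in the paper compare.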
international symposium on computers and communications | 2004
Sarah Tasneem; Reda A. Ammar; Howard A. Sholl
Scheduling can require analyzing not only the total computation time of a task, but also its remaining execution time, R(t)_Δt, after an accumulated execution time Δt. Often a software program's execution time is characterized by a single value (the mean). When scheduling is based on partial execution (a common scenario in multimedia systems), a more accurate estimate of the remaining time R(t)_Δt than the initial mean alone is needed to make effective scheduling decisions. The remaining-time approach can provide this more accurate estimate, and therefore more effective scheduling, in time-sensitive situations. We developed an analytical model for computing the expected remaining execution time, R̃(t)_Δt, of software programs from their execution time probability distributions. To implement the equations, we further designed an algorithm that computes R̃(t)_Δt for operating system scheduling applications. We proved that the run-time complexity of the algorithm is O(1), and is therefore independent of the size of the distribution. A more accurate estimate of R̃(t)_Δt implies better scheduling performance in applications where remaining execution time is used, especially in CPU scheduling.
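One way to see how an O(1)-per-query algorithm is possible is to precompute a table of expected remaining times over a uniform time grid, so that each lookup is a single index computation. The sketch below follows that idea; the uniform grid, the truncation horizon, and the example distribution are assumptions, and the paper's own construction may differ in detail.

```python
import math

class RemainingTime:
    """Precomputed table of E[T - a | T > a] on a uniform grid,
    giving O(1) queries independent of the distribution's size."""

    def __init__(self, pdf, horizon, n=10_000):
        self.dt = horizon / n
        # probability mass in each grid cell, taken at cell midpoints
        p = [pdf((i + 0.5) * self.dt) * self.dt for i in range(n)]
        tail_p, tail_m = 0.0, 0.0      # tail probability and tail first moment
        self.expect = [0.0] * n        # expected remaining time at a = i*dt
        for i in range(n - 1, -1, -1):
            tail_p += p[i]
            tail_m += p[i] * (i + 0.5) * self.dt
            if tail_p > 0.0:
                self.expect[i] = tail_m / tail_p - i * self.dt

    def query(self, attained):
        # O(1): one division, one table lookup
        i = min(int(attained / self.dt), len(self.expect) - 1)
        return self.expect[i]

# Example: exponential(mean 1); by memorylessness the expected
# remaining time is ~1 regardless of the attained service.
rt = RemainingTime(pdf=lambda t: math.exp(-t), horizon=20.0)
print(rt.query(0.5), rt.query(5.0))    # both approximately 1.0
```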
Journal of Software Engineering and Applications | 2010
Sarah Tasneem; Lester Lipsky; Reda A. Ammar; Howard A. Sholl
It is well known in queueing theory that system performance is greatly influenced by the scheduling policy. No universal optimum scheduling strategy exists in systems where individual customer service demands are not known a priori. However, if the distribution of job times is known, then the residual time (the expected time remaining for a job, given the service it has already received) can be calculated. Our particular research contribution is in exploring the use of this function to enhance system performance by increasing the probability that a job will meet its deadline. In a detailed discrete event simulation, we have tested many different distributions with a wide range of squared coefficients of variation (C²) and shapes, for both single and dual processor systems. Results for four distributions are reported here. We compare with RR and FCFS, and find that for all distributions studied our algorithm performs best. In studying the use of two slow servers versus one fast server, we have discovered that they provide comparable performance, and in a few cases the double-server system does better.
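One plausible instantiation of such a residual-time-driven policy is to serve the queued job with the smallest expected remaining time given its attained service, using a lookup such as the table sketched above. The selection rule, the Job fields, and the example residual function below are illustrative assumptions, not necessarily the paper's exact algorithm.

```python
from dataclasses import dataclass

@dataclass
class Job:
    attained: float   # service already received
    deadline: float   # absolute deadline (not used by this simple rule)

def pick_next(jobs, residual):
    # SERPT-like rule (assumed): serve the job whose expected remaining
    # time, given the service it has already received, is smallest.
    return min(jobs, key=lambda j: residual(j.attained))

# Toy example: jobs of deterministic size 4, so residual(a) = 4 - a.
queue = [Job(attained=0.2, deadline=5.0), Job(attained=3.0, deadline=4.0)]
print(pick_next(queue, residual=lambda a: max(0.0, 4.0 - a)))  # picks the 3.0 job
```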
network computing and applications | 2009
Steve Thompson; Lester Lipsky; Sarah Tasneem; Feng Zhang
It has been observed in recent years that in many applications service time demands are highly variable. Without foreknowledge of the exact service times of individual jobs, processor sharing is an effective theoretical strategy for handling such demands. In practice, however, processor sharing must be implemented by time-slicing with a round-robin discipline. In this paper, we investigate how round-robin performs when job-switching overhead is taken into account. Based on recent results, we assume that the best strategy is for new jobs to preempt the one in service. By analyzing time-slicing with overhead, we derive the effective utilization parameter and give a good approximation for the lower bound on the time-slice under a given system load and overhead. The simulation results show that for both exponential and non-exponential distributions, the system blow-up points agree with what the effective utilization parameter predicts. Furthermore, when overhead is taken into account, an optimal time-slice value exists for a particular environment.
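A back-of-the-envelope sketch (under assumed load and overhead values, and not necessarily the paper's exact derivation) shows where an effective utilization parameter and a time-slice lower bound come from: a job of size S served in slices of length q with per-switch overhead h occupies the server for roughly S + ceil(S/q)·h ≈ S·(1 + h/q), so the effective utilization is ρ_eff = ρ·(1 + h/q), and stability (ρ_eff < 1) requires q > ρ·h/(1 − ρ).

```python
def effective_utilization(rho, h, q):
    # rho: offered load, h: per-switch overhead, q: time-slice length
    return rho * (1.0 + h / q)

def min_time_slice(rho, h):
    # smallest q keeping the effective utilization below 1
    return rho * h / (1.0 - rho)

rho, h = 0.8, 0.01                 # assumed load and switching overhead
q_min = min_time_slice(rho, h)     # 0.04 for these values
print(f"q must exceed ~{q_min:.3f}; at q = {2 * q_min:.3f}, "
      f"rho_eff = {effective_utilization(rho, h, 2 * q_min):.3f}")
```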
Archive | 2009
H.-yu. Tu; Sarah Tasneem
Most modern microprocessors employ on-chip cache memories to meet the memory bandwidth demand. These caches now occupy a large share of the chip real estate. Moreover, the continuous down-scaling of transistors increases the possibility of defects in the cache area, which already occupies more than 50% of the chip area. For this reason, various techniques have been proposed to tolerate defects in cache blocks. These techniques can be classified into three categories: cache line disabling, replacement with spare blocks, and decoder reconfiguration without spare blocks. This chapter examines each of these fault-tolerant techniques for a fixed, typical size and organization of L1 cache, through extended simulation of the individual techniques using the SPEC2000 benchmarks. The design and characteristics of each technique are summarized with a view to evaluating the scheme. We then present our simulation results and a comparative study of the three methods.
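Of the three categories, cache line disabling is the simplest to illustrate: defective lines are marked so that any access mapping to them bypasses the cache and is serviced by the next level. The sketch below is a minimal illustration only; the cache size, fault map, and address stream are assumptions, not the chapter's simulated configuration.

```python
import random

class DirectMappedCache:
    """Direct-mapped cache with line disabling: faulty lines are
    never allocated, so accesses to them always miss."""

    def __init__(self, n_lines, faulty):
        self.n = n_lines
        self.faulty = set(faulty)      # indices of defective lines
        self.tags = [None] * n_lines

    def access(self, block_addr):
        idx = block_addr % self.n
        if idx in self.faulty:
            return False               # disabled line: miss to next level
        tag = block_addr // self.n
        if self.tags[idx] == tag:
            return True                # hit
        self.tags[idx] = tag           # allocate on miss
        return False

rng = random.Random(0)
cache = DirectMappedCache(n_lines=64, faulty=[3, 17])
refs = [rng.randrange(96) for _ in range(10_000)]   # small working set
print("hit ratio:", sum(map(cache.access, refs)) / len(refs))
```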
network computing and applications | 2005
Sarah Tasneem; Lester Lipsky; Reda A. Ammar; Howard A. Sholl
In systems where job service demands are known only probabilistically, there is very little to distinguish between jobs, and therefore no universal optimum scheduling strategy or algorithm exists. If the distribution of job times is known, then the residual time (the expected time remaining for a job, given the service it has already received) can be calculated. In a detailed discrete event simulation, we have explored the use of this function to increase the probability that a job will meet its deadline. We have tested many different distributions with a wide range of variances (σ²) and shapes, four of which are reported here. We compare with RR and FCFS, and find that for all distributions studied our algorithm performs best. We also studied the use of two slow servers versus one fast server, and found that they provide comparable performance, with the double-server system doing better in a few cases.
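For the exponential case, the two-slow-versus-one-fast comparison can be checked against classical closed forms (the paper itself evaluates general distributions by simulation): one fast server is an M/M/1 queue with rate 2μ, while two slow servers form an M/M/2 queue with rate μ each. The standard mean response times are W_fast = (1/(2μ))/(1 − ρ) and W_two = (1/μ)/(1 − ρ²) with ρ = λ/(2μ), so their ratio 2/(1 + ρ) approaches 1 as the load grows, consistent with the comparable performance reported here.

```python
def mean_response_times(lam, mu):
    # One fast server (M/M/1 at rate 2*mu) vs. two slow servers
    # (M/M/2 at rate mu each); exponential assumptions only.
    rho = lam / (2 * mu)
    w_fast = (1 / (2 * mu)) / (1 - rho)
    w_two = (1 / mu) / (1 - rho ** 2)
    return w_fast, w_two

for lam in (0.5, 1.0, 1.8):
    wf, wt = mean_response_times(lam, mu=1.0)
    print(f"load {lam / 2:.2f}: one fast {wf:.2f}, two slow {wt:.2f}")
```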
Journal of Computing Sciences in Colleges | 2012
Sarah Tasneem
spring simulation multiconference | 2009
Feng Zhang; Sarah Tasneem; Lester Lipsky; Steve Thompson
International Journal of Computers and Their Applications | 2010
Sarah Tasneem; Reda A. Ammar; Lester Lipsky; Howard A. Sholl
computer applications in industry and engineering | 2005
Sarah Tasneem; Howard A. Sholl; Reda A. Ammar