
Publication


Featured research published by Naresh M. Patel.


European Journal of Operational Research | 2000

Reliability modelling using G-queues

Peter G. Harrison; Naresh M. Patel; Edwige Pitel

A new technique for modelling unreliable queueing nodes is presented, based on the notion of queues with negative customers – called G-queues. A breakdown at a server is represented by the arrival of a negative customer which causes some customers to be lost. We first model an M/M/1 queue with breakdowns and instantaneous repairs, which is relevant in systems such as packet-switched telephone networks where a “failure” is interpreted as the loss of a call. The analysis is then extended to the case of exponential repair times, which provides a more conventional model of unreliable systems and a variation on the contemporary notion of G-queues. Specifically, we derive expressions for the Laplace transform of the sojourn time density in a single server queue with exponential service times, independent Poisson arrival streams of positive customers and negative customers with batch killing, and both instantaneous and exponential repair times. We apply our model to approximate the performance of a bank of parallel, unreliable servers by modifying the arrival process to a Markov modulated Poisson process and considering an approximate decomposition of the underlying Markov chain under the assumption that arrivals and service occur much faster than breakdowns and repairs. Numerical validation with respect to simulation suggests that the approximation is accurate, especially for non-heavily utilised servers. Finally we indicate how to extend our approach to arbitrarily connected Markovian queueing networks with breakdowns.
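For readers unfamiliar with negative customers, the short Python sketch below simulates the simplest G-queue: an M/M/1 queue in which each negative arrival removes a single customer. It is a minimal illustration of the concept only, not the paper's model with batch killing and repair phases; the parameter values are arbitrary, and the comparison uses the known geometric stationary distribution with rate λ⁺/(μ + λ⁻) for this single-removal case.

```python
import random

def simulate_g_queue(lam_pos, lam_neg, mu, horizon=200_000, seed=1):
    """Simulate an M/M/1 G-queue in which each negative arrival removes one
    customer (if any are present). Returns the time-averaged queue-length
    distribution."""
    random.seed(seed)
    t, n = 0.0, 0
    time_in_state = {}
    while t < horizon:
        rate = lam_pos + lam_neg + (mu if n > 0 else 0.0)
        dt = random.expovariate(rate)
        time_in_state[n] = time_in_state.get(n, 0.0) + dt
        t += dt
        u = random.random() * rate
        if u < lam_pos:
            n += 1                      # positive customer arrives
        elif u < lam_pos + lam_neg:
            n = max(n - 1, 0)           # negative customer kills one customer
        else:
            n -= 1                      # service completion
    total = sum(time_in_state.values())
    return {k: v / total for k, v in sorted(time_in_state.items())}

if __name__ == "__main__":
    dist = simulate_g_queue(lam_pos=0.8, lam_neg=0.3, mu=1.0)
    rho = 0.8 / (1.0 + 0.3)             # stationary rate for single removal
    for n in range(5):
        print(n, round(dist.get(n, 0.0), 4), round((1 - rho) * rho ** n, 4))
```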


Performance Evaluation | 2012

Storage workload modelling by hidden Markov models: Application to Flash memory

Peter G. Harrison; S. K. Harrison; Naresh M. Patel; Soraya Zertal

A workload analysis technique is presented that processes data from operation type traces and creates a hidden Markov model (HMM) to represent the workload that generated those traces. The HMM can be used to create representative traces for performance models, such as simulators, avoiding the need to repeatedly acquire suitable traces. It can also be used to estimate the transition probabilities and rates of a Markov modulated arrival process directly, for use as input to an analytical performance model of Flash memory. The HMMs obtained from industrial workloads (both synthetic benchmarks, preprocessed by a file translation layer, and real, time-stamped user traces) are validated by comparing their autocorrelation functions and other statistics with those of the corresponding monitored time series. Further, the performance model applications referred to above are illustrated by numerical examples.
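As a concrete illustration of how a fitted HMM can stand in for a monitored trace, the sketch below samples a synthetic operation-type trace from a discrete HMM. The two-state transition and emission matrices are hypothetical placeholders for parameters that would normally be estimated from a real trace (for example via Baum-Welch); they are not taken from the paper.

```python
import numpy as np

def generate_trace(A, B, ops, length, seed=0):
    """Sample a synthetic operation-type trace from a discrete hidden Markov
    model with transition matrix A (states x states) and emission matrix B
    (states x symbols)."""
    rng = np.random.default_rng(seed)
    n_states = A.shape[0]
    state = rng.integers(n_states)          # arbitrary initial hidden state
    trace = []
    for _ in range(length):
        trace.append(ops[rng.choice(len(ops), p=B[state])])
        state = rng.choice(n_states, p=A[state])
    return trace

# Hypothetical two-state model: a "read-heavy" phase and a "write-heavy" phase.
A = np.array([[0.95, 0.05],
              [0.10, 0.90]])
B = np.array([[0.85, 0.10, 0.05],    # emissions in state 0: read, write, erase
              [0.15, 0.75, 0.10]])   # emissions in state 1
trace = generate_trace(A, B, ["read", "write", "erase"], 60)
print("".join({"read": "R", "write": "W", "erase": "E"}[op] for op in trace))
```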


Performance Evaluation | 2010

Response time distribution of flash memory accesses

Peter G. Harrison; Naresh M. Patel; Soraya Zertal

Flash memory is becoming an increasingly important storage component among non-volatile storage devices. Its cost is decreasing dramatically and its performance continues to improve, which makes it a serious competitor for disks and a candidate for enterprise-tier storage devices of the future. Consequently, it is important to devise models and tools to analyse its behaviour and to evaluate its effects on a system's performance. We propose a Markov modulated fluid model with priority classes to investigate the response time characteristics of Flash memory accesses. This model represents the Flash access operation types well, respecting the erase/write/read relative priorities and autocorrelations. We apply the model to estimate response time densities at the chip for an OLTP-type workload and indicate the magnitude of the penalty suffered by writes under priority scheduling of read operations. The model is validated against a customised hardware simulator that uses input traces typical of our Markovian workload description.
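The fluid analysis itself is beyond the scope of an abstract, but the write penalty under read priority can be illustrated with a much cruder tool: a small discrete-event simulation of a single chip serving reads, writes and erases under non-preemptive priority. The service times and arrival rates below are illustrative assumptions only, not measurements, and this is not the paper's Markov modulated fluid model.

```python
import heapq
import random

# Assumed per-operation service times (microseconds) and Poisson arrival
# rates (operations per microsecond); total utilization is about 0.85.
SERVICE = {"read": 25.0, "write": 200.0, "erase": 1500.0}
PRIORITY = {"read": 0, "write": 1, "erase": 2}      # lower value = higher priority
RATES = {"read": 0.016, "write": 0.0015, "erase": 0.0001}

def simulate(horizon=2_000_000.0, seed=2):
    """Non-preemptive priority queue at a single Flash chip; returns the mean
    response time per operation type."""
    random.seed(seed)
    arrivals = []
    for op, rate in RATES.items():
        t = 0.0
        while True:
            t += random.expovariate(rate)
            if t > horizon:
                break
            arrivals.append((t, op))
    arrivals.sort()
    queue, resp = [], {op: [] for op in RATES}
    clock, i = 0.0, 0
    while i < len(arrivals) or queue:
        while i < len(arrivals) and arrivals[i][0] <= clock:
            t, op = arrivals[i]
            heapq.heappush(queue, (PRIORITY[op], t, op))
            i += 1
        if not queue:                   # server idle: jump to the next arrival
            clock = arrivals[i][0]
            continue
        _, t_arr, op = heapq.heappop(queue)
        clock += SERVICE[op]            # serve to completion (non-preemptive)
        resp[op].append(clock - t_arr)
    return {op: sum(v) / len(v) for op, v in resp.items() if v}

print(simulate())
```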


Archive | 1996

Negative Customers Model Queues with Breakdowns

Peter G. Harrison; Edwige Pitel; Naresh M. Patel

Negative customers in queueing networks are used to model breakdowns at a server which cause some customers to be lost. We consider an unreliable M/M/1 queue with both instantaneous and exponential repairs, and derive expressions for the Laplace transform of the sojourn time density. We apply the model to approximate the performance of a bank of parallel, unreliable servers by modifying the arrival process to a Markov modulated Poisson process and considering an approximate decomposition of the underlying Markov chain under the assumption that arrivals and service occur much faster than breakdowns and repairs. Validation is by simulation.


ACM Transactions on Modeling and Performance Evaluation of Computing Systems | 2016

Energy-Performance Trade-Offs via the EP Queue

Peter G. Harrison; Naresh M. Patel; William J. Knottenbelt

We introduce the EP queue, a significant generalization of the MB/G/1 queue that has state-dependent service time probability distributions and incorporates power-up for first arrivals and power-down for idle periods. We derive exact results for the busy-time and response-time distributions. From these, we derive power consumption metrics during nonidle periods and overall response time metrics, which together provide a single measure of the trade-off between energy and performance. We illustrate these trade-offs for some policies and show how numerical results can provide insights into system behavior. The EP queue has application to storage systems, especially hard disks, and other data-center components such as compute servers, networking, and even hyperconverged infrastructure.
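The exact busy-time and response-time transforms are derived in the paper; the toy simulation below only illustrates the kind of trade-off the EP queue captures. It compares a server that stays powered on while idle with one that powers down and pays a power-up delay before the first job of each busy period. All rates, delays and power levels are invented for the example.

```python
import random

LAM, MU = 0.5, 1.0          # arrival and service rates (assumed)
POWER_UP = 2.0              # power-up delay before the first job of a busy period
P_ACTIVE, P_IDLE_ON, P_OFF = 10.0, 4.0, 0.5   # assumed power draw per mode

def simulate(power_down_when_idle, n_jobs=200_000, seed=3):
    """FCFS single server with optional power-down during idle periods.
    Returns (mean response time, mean power)."""
    random.seed(seed)
    clock = energy = 0.0
    server_free_at = 0.0
    resp = []
    for _ in range(n_jobs):
        clock += random.expovariate(LAM)              # next arrival time
        if clock >= server_free_at:                   # arrival finds server idle
            idle = clock - server_free_at
            energy += idle * (P_OFF if power_down_when_idle else P_IDLE_ON)
            setup = POWER_UP if power_down_when_idle else 0.0
            energy += setup * P_ACTIVE                # power-up phase
            start = clock + setup
        else:
            start = server_free_at                    # job waits in the queue
        service = random.expovariate(MU)
        energy += service * P_ACTIVE
        server_free_at = start + service
        resp.append(server_free_at - clock)           # response = completion - arrival
    return sum(resp) / len(resp), energy / server_free_at

for policy in (False, True):
    t, p = simulate(policy)
    print(f"power-down={policy}: mean response time {t:.2f}, mean power {p:.2f}")
```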


Measurement and Modeling of Computer Systems | 2015

Half-Latency Rule for Finding the Knee of the Latency Curve

Naresh M. Patel

Latency curves for computer systems typically rise rapidly beyond some threshold utilization, and many mathematical methods have been suggested to find this knee, but none seem to match common practice. This paper proposes a trade-off metric called ATP (an alternative to Kleinrock's power metric) which generates a half-latency rule for calculating the location of the knee of a latency curve. Exact analysis with this approach applied to the simplest single-server queue results in an optimal server utilization of 71.5%, which is close to the 70% utilization used in practice. The half-latency rule also applies to practical situations that generate a discrete set of throughput and latency measurements. The discrete use cases include both production systems (for provisioning new work) and lab systems (for summarizing the entire latency curve into a single figure of merit for each workload and system configuration).
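The ATP metric itself is defined in the paper; as a baseline for comparison, the sketch below finds the knee of a discrete set of (throughput, latency) measurements by maximizing Kleinrock's classical power metric (throughput divided by latency), which the abstract cites. The measurements are synthetic M/M/1 values, not data from the paper; for M/M/1 the power metric peaks at 50% utilization, which is the kind of mismatch with the roughly 70% used in practice that the paper's half-latency rule addresses.

```python
# Synthetic M/M/1 measurements: latency = 1 / (mu - lambda).
MU = 1000.0                                   # assumed service rate, requests/s

points = []
for k in range(1, 20):
    util = k / 20.0
    thru = util * MU                          # offered throughput, requests/s
    latency = 1.0 / (MU - thru)               # mean response time, seconds
    points.append((thru, latency))

# Knee by Kleinrock's power metric: maximize throughput / latency.
knee = max(points, key=lambda p: p[0] / p[1])
print(f"power-metric knee at {knee[0] / MU:.0%} utilization, "
      f"latency {knee[1] * 1000:.2f} ms")     # lands at 50% for M/M/1
```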


Measurement and Modeling of Computer Systems | 2012

Performance implications of flash and storage class memories

Naresh M. Patel

The storage industry has seen incredible growth in data storage needs by both consumers and enterprises. Long-term technology trends mean that the data deluge will continue well into the future. These trends include the big-data trend (driven by data mining analytics, high-bandwidth needs, and large content repositories), server virtualization, cloud storage, and Flash. We will cover how Flash and storage class memories (SCM) interact with some of these major trends from a performance perspective.


International Conference on Performance Engineering | 2018

Optimizing Energy-Performance Trade-Offs in Solar-Powered Edge Devices

Peter G. Harrison; Naresh M. Patel

Power modes can be used to save energy in electronic devices, but a low power level typically degrades performance. This trade-off is addressed in the so-called EP-queue model, a queue-depth-dependent M/GI/1 queue augmented with power-down and power-up phases of operation. The ability to change service times through power settings allows us to leverage a Markov decision process (MDP), an approach we illustrate using a simple, fully solar-powered case study with finite states representing levels of battery charge and solar intensity.
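To make the MDP formulation concrete, the toy value-iteration sketch below uses states of (battery level, solar intensity) and power-mode actions, as the abstract describes. Every number in it (state granularity, transition probabilities, rewards, discount factor) is a hypothetical placeholder, not a value from the case study.

```python
import itertools

BATTERY = range(4)            # 0 = empty .. 3 = full (assumed granularity)
SOLAR = range(2)              # 0 = low sun, 1 = high sun
ACTIONS = {"low": {"drain": 0, "reward": 1.0},
           "high": {"drain": 1, "reward": 3.0}}
SOLAR_P = [[0.7, 0.3],        # P(next solar level | current solar level)
           [0.4, 0.6]]
GAMMA = 0.9                   # discount factor

def step(b, s, a):
    """Battery charges under high sun and drains with the chosen power mode;
    solar intensity evolves stochastically. Returns [(prob, next_state, reward)]."""
    drain = ACTIONS[a]["drain"] - (1 if s == 1 else 0)
    nb = min(max(b - drain, 0), max(BATTERY))
    reward = ACTIONS[a]["reward"] if b > 0 else -5.0   # penalty for an empty battery
    return [(SOLAR_P[s][ns], (nb, ns), reward) for ns in SOLAR]

V = {st: 0.0 for st in itertools.product(BATTERY, SOLAR)}
for _ in range(200):                                   # value iteration
    V = {st: max(sum(p * (r + GAMMA * V[nst]) for p, nst, r in step(*st, a))
                 for a in ACTIONS)
         for st in V}

policy = {st: max(ACTIONS, key=lambda a: sum(p * (r + GAMMA * V[nst])
                                             for p, nst, r in step(*st, a)))
          for st in V}
print(policy)                 # chosen power mode for each (battery, solar) state
```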


ACM Transactions on Storage | 2018

Empirical Evaluation and Enhancement of Enterprise Storage System Request Scheduling

Deng Zhou; Vania Fang; Tao Xie; Wen Pan; Ram Kesavan; Tony Lin; Naresh M. Patel

Little has been reported in the literature concerning file-level request scheduling in enterprise storage systems, so we do not have enough knowledge about how various scheduling factors affect performance. Moreover, we lack a good understanding of how to enhance request scheduling to adapt to the changing characteristics of workloads and hardware resources. To answer these questions, we first build a request scheduler prototype based on WAFL®, a mainstream file system running on numerous enterprise storage systems worldwide. Next, we use the prototype to quantitatively measure the impact of various scheduling configurations on performance on a NetApp® enterprise-class storage system. Several observations have been made. For example, we discover that to improve performance, the priority of write requests and non-preempted restarted requests should be boosted in some workloads. Inspired by these observations, we further propose two scheduling enhancement heuristics called SORD (size-oriented request dispatching) and QATS (queue-depth aware time slicing). Finally, we evaluate them by conducting a wide range of experiments using workloads generated by SPC-1 and SFS2014 on both HDD-based and all-flash platforms. Experimental results show that the combination of the two can noticeably reduce average request latency under some workloads.
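The SORD and QATS heuristics are specified in the paper itself; the fragment below is only a generic, hypothetical illustration of the size-oriented idea: draining pending requests smallest-first in bounded batches so that small requests are not queued behind large ones. It is not the paper's scheduler or WAFL code.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Request:
    size_kb: int                       # requests compare (and dispatch) by size
    rid: int = field(compare=False)

def dispatch_rounds(pending, max_per_round=4):
    """Drain pending requests smallest-first, at most max_per_round per round."""
    heap = list(pending)
    heapq.heapify(heap)
    rounds = []
    while heap:
        batch = [heapq.heappop(heap) for _ in range(min(max_per_round, len(heap)))]
        rounds.append([r.rid for r in batch])
    return rounds

reqs = [Request(size_kb=s, rid=i)
        for i, s in enumerate([4, 256, 8, 64, 4, 1024, 16, 8])]
print(dispatch_rounds(reqs))           # e.g. [[0, 4, 2, 7], [6, 3, 1, 5]]
```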


Archive | 1993

Performance Modelling of Communication Networks and Computer Architectures

Peter G. Harrison; Naresh M. Patel
