Don McNickle
University of Canterbury
Publications
Featured research published by Don McNickle.
IEEE Region 10 Conference | 2005
Don McNickle; Ronald G. Addie
The existing Internet appears to provide good quality of service for a very wide range of services, possibly because the TCP protocols aim to achieve fair queueing, or processor sharing. The DiffServ architecture aims to do better than this by providing different performance standards for different classes of service. The natural way to apply DiffServ is to allocate classes according to the urgency or priority of the requests. However, another approach is to use DiffServ to allocate service classes according to the size of the requests, where the concept of size can be defined in a variety of ways: total bytes in a flow, rate of a flow, or by a series of token buckets. We use simple queueing models to investigate how much improvement in performance could be obtained by implementing this service discipline, and whether, as a consequence, it is unnecessary and perhaps even dangerous to assign classes of service according to the type of service requested. The results suggest that shortest job first offers considerable advantages over processor sharing. Thus, in spite of the difficulty of identifying the size of flows, it may be worthwhile to consider how something like shortest job first could be implemented. On the other hand, it appears that other priority-queue strategies, not based on job size, are risky, in that the marginal advantages gained by favoured jobs are very small, and the majority of jobs can expect to suffer worse response times.
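The comparison in this abstract can be illustrated with a toy experiment (a sketch, not the authors' model; the rates and job count are illustrative assumptions). For an M/M/1 queue the mean response time under processor sharing is 1/(μ−λ), so a small event-driven simulation of non-preemptive shortest-job-first can be checked against that benchmark:

```python
import heapq
import random

def sjf_mean_response(lam=0.8, mu=1.0, n_jobs=200_000, seed=1):
    """Mean response time of a single server under non-preemptive
    shortest-job-first, with Poisson arrivals and exponential job sizes."""
    rng = random.Random(seed)
    t = 0.0
    arrivals = []                       # (arrival_time, service_time)
    for _ in range(n_jobs):
        t += rng.expovariate(lam)
        arrivals.append((t, rng.expovariate(mu)))

    queue = []                          # min-heap keyed on service time
    clock = 0.0
    total_response = 0.0
    i = 0
    while i < n_jobs or queue:
        if not queue:                   # server idle: jump to next arrival
            clock = max(clock, arrivals[i][0])
        while i < n_jobs and arrivals[i][0] <= clock:
            a, s = arrivals[i]
            heapq.heappush(queue, (s, a))
            i += 1
        s, a = heapq.heappop(queue)     # shortest waiting job goes next
        clock += s
        total_response += clock - a
    return total_response / n_jobs
```

For λ = 0.8 and μ = 1, processor sharing gives a mean response time of 1/(μ−λ) = 5; the shortest-job-first estimate comes out noticeably lower, consistent with the abstract's conclusion.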
European Conference on Modelling and Simulation | 2009
Don McNickle; Krzysztof Pawlikowski; Gregory Ewing
This paper describes and summarises our research on enhancing the methodology of automated discrete-event simulation and its implementation in Akaroa2, a controller of such simulation studies. Akaroa2 addresses two major practical issues in the application of stochastic simulation to performance evaluation studies of complex dynamic systems: (i) the accuracy of the final results; and (ii) the length of time required to achieve these results. Issue (i) is addressed by running simulations sequentially, with on-line analysis of statistical errors until these reach an acceptably low level. For (ii), Akaroa2 launches multiple copies of a simulation program on networked processors, applying the Multiple Replications in Parallel (MRIP) scenario. In MRIP the processors run independent replications, generating statistically equivalent streams of simulation output data. These data are fed to a global data analyser responsible for analysing the results and for stopping the simulation. We outline the main design issues of Akaroa2 and detail some of the improvements and extensions made to this tool over the last 10 years.
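The MRIP scenario can be sketched in a few lines (a hypothetical toy, not Akaroa2's actual code): several independent replications, distinguished only by their seeds, feed one pooled analyser that stops the run once the relative half-width of the confidence interval drops below a target. Here the "engine" emits i.i.d. observations, so the plain standard error is valid; real steady-state output is correlated and needs the more careful sequential analysis Akaroa2 performs.

```python
import random
import statistics

def waiting_time_engine(seed):
    """Toy 'simulation engine': an endless stream of i.i.d. delays (mean 2.0)."""
    rng = random.Random(seed)
    while True:
        yield rng.expovariate(0.5)

def mrip(make_engine, n_engines=4, batch=1_000, rel_precision=0.05,
         max_obs=1_000_000, z=1.96):
    """Pool output of independent replications; stop at the target precision."""
    engines = [make_engine(seed) for seed in range(n_engines)]
    data = []
    while len(data) < max_obs:
        for eng in engines:                    # one batch per engine per round
            data.extend(next(eng) for _ in range(batch))
        mean = statistics.fmean(data)
        half = z * statistics.stdev(data) / len(data) ** 0.5
        if half < rel_precision * abs(mean):   # relative precision reached
            return mean, half
    return mean, half
```

Because the replications are statistically equivalent, their outputs can simply be pooled; the global analyser only decides when enough data has arrived.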
IET Networks | 2014
Muhammad Asad Arfeen; Krzysztof Pawlikowski; Andreas Willig; Don McNickle
Internet traffic at various tiers of service providers is essentially a superposition, or active mixture, of traffic from various sources. The statistical properties of this superposition, and the resulting phenomenon of scaling, are important for network performance (queueing), traffic engineering (routing) and network dimensioning (bandwidth provisioning). In this article, the authors study the processes of superposition and scaling jointly in a non-asymptotic framework, so as to better understand the point-process nature of the cumulative input traffic arriving at telecommunication devices (e.g., switches, routers). The authors further assess the scaling dynamics of the structural components (packets, flows and sessions) of the cumulative input process and their relation to the superposition of point processes. Classical and new results are discussed together with their applicability in access and core networks. The authors propose that renewal-theory-based approximate point process models, namely Pareto renewal process superposition and Weibull renewal process superposition, can reproduce the second-order scaling observed in traffic data of access and backbone core networks, respectively.
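The kind of point-process superposition studied here is easy to prototype (a sketch with illustrative parameters; the quick rate check below uses α = 2.5 so the estimate converges fast, whereas the heavy-tailed regime of interest has 1 < α < 2):

```python
import random

def pareto_interarrival(rng, alpha, xm=1.0):
    """Pareto(alpha, xm) interarrival; mean xm*alpha/(alpha-1) for alpha > 1."""
    return xm / (1.0 - rng.random()) ** (1.0 / alpha)

def superpose(n_sources=10, horizon=10_000.0, alpha=2.5, seed=0):
    """Merge n independent Pareto renewal streams into one arrival process."""
    rng = random.Random(seed)
    arrivals = []
    for _ in range(n_sources):
        t = 0.0
        while True:
            t += pareto_interarrival(rng, alpha)
            if t > horizon:
                break
            arrivals.append(t)
    arrivals.sort()                 # superposition = ordered union of streams
    return arrivals
```

By the elementary renewal theorem the merged rate approaches n/E[X]; for n = 10, α = 2.5 and xm = 1 that is 10·(α−1)/(α·xm) = 6 arrivals per unit time. The second-order (scaling) behaviour of such merged streams is what the paper's models aim to capture.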
European Conference on Modelling and Simulation | 2009
Adriaan Schmidt; Krzysztof Pawlikowski; Don McNickle
For sequential output data analysis in non-terminating discrete-event simulation, we consider three methods of point and interval estimation of the steady-state variance. We assess their performance in the analysis of the output of queueing simulations by means of experimental coverage analysis. Over a range of models, estimating variances turns out to require considerably more observations than estimating means. Thus, selecting estimators with good performance characteristics is even more important.

INTRODUCTION

The output sequence {x} = x1, x2, ... of a simulation program is usually regarded as a realisation of a stochastic process {X} = X1, X2, .... In the case of steady-state simulation, we assume this process to be stationary and ergodic. Current analysis of output data from discrete-event simulation focuses almost exclusively on the estimation of mean values. Thus, the literature on “variance estimation” mostly deals with the estimation of the variance of the mean, which is needed to construct a confidence interval for the estimated mean values. In this paper, we are interested in finding point and interval estimates of the steady-state variance σ² = Var[Xi], together with the variance of the variance, from which we can construct confidence intervals for the variance estimates. As with the estimation of mean values, one problem in variance estimation is that output data from steady-state simulation are usually correlated. The variance we estimate is not to be confused with the quantity σ₀² = lim_{n→∞} n·Var[X̄(n)], sometimes referred to as the variance parameter (Chen and Sargent, 1990) or the steady-state variance constant (Steiger and Wilson, 2001), which is important in the method of standardized time series (Schruben, 1983) and the various methods building on this concept.

Applications for the estimators we propose can be found in the performance analysis of communication networks. In audio or video streaming applications, for example, the actual packet delay is less important than the packet delay variation, or jitter (see e.g. Tanenbaum, 2003). Other applications include the estimation of safety stocks or buffer sizes, and statistical process control. Our estimation procedures are designed for sequential estimation: as more observations are generated, the estimates are continually updated, and the simulation is stopped upon reaching the required precision.

In simulation practice, one observes an initial transient phase of the simulation output due to the initial conditions of the simulation program, which are usually not representative of its long-run behaviour. It is common practice to let the simulation “warm up” before collecting observations for analysis. For many processes, σ² converges to its steady-state value more slowly than the process mean; therefore, existing methods for detecting the initial transient period with regard to the mean value may not always be applicable to variance estimation. A method based on distributions, which includes the variance, is described in (Eickhoff et al., 2007). This is, however, not the focus of this paper, so we use the method described in (Pawlikowski, 1990).

In the next section we present three different methods of estimating the steady-state variance. We assessed these estimators experimentally in terms of the coverage of confidence intervals. The results of the experiments are presented in Section 3. The final section of the paper summarises our findings and gives an outlook on future research.

ESTIMATING THE STEADY-STATE VARIANCE

In the case of independent and identically distributed random variables, the well-known consistent estimate of the
2013 22nd ITC Specialist Seminar on Energy Efficient and Green Networking (SSEEGN) | 2013
Saghar Izadpanah; Krzysztof Pawlikowski; Franco Davoli; Don McNickle
Data-intensive applications that involve large amounts of data generation, processing and transmission have been operated with little attention to energy efficiency. Issues such as the management, movement and storage of huge volumes of data may lead to high energy consumption. Replication is a useful way to decrease data access time and improve performance in these applications, but it may also increase the energy spent on storage and data transmission by spreading large volumes of data replicas around the network. Thus, effective strategies for energy saving in these applications are critical from both the environmental and the economic perspective. In this paper, we first review current data replication and caching approaches, and energy-saving methods in the context of data replication. Then, we propose a model for energy consumption during data replication and, finally, we evaluate two schemes for data fetching based on two critical metrics in Grid environments: energy consumption and data access time. Using simulation, we also compare the gains on these metrics with a no-caching scenario.
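The trade-off can be stated as back-of-the-envelope arithmetic (a hypothetical linear model with made-up energy figures, not the model proposed in the paper): caching pays off once the energy saved on remote transfers outweighs the overhead of creating and holding the replica.

```python
def replication_energy(n_requests, hit_ratio,
                       e_remote=5.0,       # J per remote fetch (assumed)
                       e_local=0.5,        # J per local/cached fetch (assumed)
                       e_store=50.0):      # J to create and hold the replica
    """Total energy with and without caching, under a toy linear model."""
    with_cache = (e_store
                  + n_requests * hit_ratio * e_local
                  + n_requests * (1.0 - hit_ratio) * e_remote)
    no_cache = n_requests * e_remote
    return with_cache, no_cache
```

With these assumed figures, 1 000 requests at an 80% hit ratio cost about 1 450 J with the replica versus 5 000 J without it; at a 0% hit ratio the replica only adds its storage overhead, which is why energy and access time must be weighed together.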
2016 26th International Telecommunication Networks and Applications Conference (ITNAC) | 2016
M. Asad Arfeen; Krys Pawlikowski; Andreas Willig; Don McNickle
The statistical properties of traffic in Internet access networks have long been of interest to networking researchers and practitioners. In this paper, we analyse network traffic originating and terminating in various types of Internet access networks (Ethernet, Digital Subscriber Line, wireless hotspot and their next-tier Internet Service Provider core networks) and show that renewal processes with heavy-tail distributed interarrival times (also known as fractal renewal processes) have great potential for capturing the statistical properties of traffic in access networks.
Australasian Telecommunication Networks and Applications Conference | 2012
Mofassir Ul Haque; Krzysztof Pawlikowski; Don McNickle; Andreas Willig
Simulation is used for developing and testing different scenarios under controlled and reproducible conditions. Proper handling of the initial transient effect in steady-state simulations, proper selection of the simulation length and proper statistical analysis of results are essential for credible simulation studies. Akaroa2, a universal controller for quantitative discrete-event simulation, has been designed to improve the credibility of simulation results. It automatically handles the initial transient effect, carries out statistical analysis of the results and controls the simulation length, stopping the simulation when the required precision is achieved, based on continuous analysis of mean values. Simulation programs can be run under Akaroa2 to produce statistically valid results. OPNET is a popular simulator for telecommunication and networking simulations, but it does not provide automated simulation-length control or initial-transient handling. We have developed an interface using OPNET's co-simulation capabilities to run OPNET simulations under Akaroa2 control. It allows OPNET users to control the length of a simulation on the basis of the accuracy of the statistical results. OPNET and Akaroa2 are available free of charge for non-profit research at universities.
25th Conference on Modelling and Simulation | 2011
Don McNickle; Krzysztof Pawlikowski; Nelson Shaw
On-line analysis of output data from discrete-event stochastic simulation focuses almost entirely on the estimation of means. Most “variance estimation” research in simulation refers to the estimation of the variance of the mean, used to construct confidence intervals for mean values. There has been little research on the estimation of the variance itself in simulation. We investigate three methods for point and interval estimates of the variance, and discuss an implementation of the best technique in an extended version of Akaroa2, a quantitative stochastic simulation controller.
Computer Systems: Science & Engineering | 2012
Jongsuk Ruth Lee; Don McNickle; Krzysztof Pawlikowski; Hae-Duck Joshua Jeong
Archive | 2008
Adriaan Schmidt; Krzysztof Pawlikowski; Don McNickle