Anthony Ventresque
University College Dublin
Publications
Featured research published by Anthony Ventresque.
Distributed Simulation and Real-Time Applications | 2012
Anthony Ventresque; Quentin Bragard; Elvis S. Liu; Liam Murphy; Georgios K. Theodoropoulos; Qi Liu
Traffic simulation can be very computationally intensive, especially for microscopic simulations of large urban areas (tens of thousands of road segments, hundreds of thousands of agents) and when real-time or faster-than-real-time simulation is required. Consider, for instance, road management authorities or police running what-if scenarios during a road incident: time is a hard constraint and the simulation is large. Hence the need for distributed simulations and for optimal space partitioning algorithms, ensuring an even distribution of the load and minimal communication between computing nodes. In this paper we describe a distributed version of SUMO, a simulator of urban mobility, and SParTSim, a space partitioning algorithm guided by the road network for distributed simulations. It outperforms classical uniform space partitioning in terms of road segment cuts and load-balancing.
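The road-network-guided partitioning idea can be sketched in a few lines. This is a minimal illustrative simplification under assumed inputs, not the actual SParTSim algorithm: roads form a graph, each node carries a load (e.g. an agent count), and k partitions are grown greedily from seed nodes, always extending the currently lightest partition, so load stays even while cuts follow the road topology.

```python
# Minimal sketch of road-network-aware space partitioning
# (hypothetical simplification; not the actual SParTSim algorithm).
# adjacency: node -> list of neighbouring nodes; load: node -> workload.
def partition_road_network(adjacency, load, k, seeds):
    part = {s: i for i, s in enumerate(seeds)}   # node -> partition id
    part_load = [load[s] for s in seeds]         # current load per partition
    frontier = [set(adjacency[s]) for s in seeds]
    unassigned = set(adjacency) - set(seeds)
    while unassigned:
        # always grow the lightest partition, keeping the load balanced
        i = min(range(k), key=lambda i: part_load[i])
        candidates = frontier[i] & unassigned
        if not candidates:
            # disconnected remainder: fall back to any unassigned node
            candidates = unassigned
        n = candidates.pop()
        part[n] = i
        part_load[i] += load[n]
        frontier[i] |= set(adjacency[n])
        unassigned.discard(n)
    return part
```

Growing along the adjacency frontier is what keeps partition borders aligned with the road network, reducing the number of cut road segments compared to uniform grid partitioning.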
International Symposium on Software Testing and Analysis | 2016
Henry Coles; Thomas Laurent; Christopher Henard; Mike Papadakis; Anthony Ventresque
Mutation testing introduces artificial defects to measure the adequacy of testing. If candidate tests can distinguish the behaviour of mutants from that of the original program, they are considered of good quality -- otherwise developers need to design new tests. While this method has been shown to be effective, industry-scale code challenges its applicability due to the sheer number of mutants and test executions it requires. In this paper we present PIT, a practical mutation testing tool for Java, applicable to real-world codebases. PIT is fast since it operates on bytecode and optimises mutant executions. It is also robust and well integrated with development tools, as it can be invoked through a command line interface, Ant or Maven. PIT is also open source and hence publicly available at \url{http://pitest.org/}.
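The core idea behind mutation testing can be illustrated in a few lines. This is a toy, language-agnostic sketch; PIT itself generates mutants by transforming Java bytecode, not source text.

```python
# Toy illustration of the mutation testing idea (not PIT itself, which
# mutates Java bytecode). A mutant is a copy of the program with one
# small syntactic change; a test input "kills" the mutant if it exposes
# a behavioural difference from the original.

def original_max(a, b):
    return a if a > b else b

def mutant_max(a, b):
    # relational-operator mutant: '>' replaced by '<' (turns max into min)
    return a if a < b else b

def kills(a, b):
    """True if input (a, b) distinguishes the mutant from the original."""
    return original_max(a, b) != mutant_max(a, b)
```

An input like (1, 2) kills this mutant, while (2, 2) does not: a suite that only exercises equal arguments would leave the defect undetected, which is exactly what a low mutation score reveals.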
International Conference on Software Testing, Verification and Validation | 2017
Thomas Laurent; Mike Papadakis; Marinos Kintis; Christopher Henard; Yves Le Traon; Anthony Ventresque
Mutation testing is extensively used in software testing studies. However, popular mutation testing tools use a restrictive set of mutants which does not conform to community standards and the mutation testing literature. This can be problematic since the effectiveness of mutation strongly depends on the mutants used. To investigate this issue we form an extended set of mutants and implement it in a popular mutation testing tool named PIT. We then show that in real-world projects the original mutants of PIT are easier to kill and lead to tests that score statistically lower than those of the extended set of mutants for 35% to 70% of the studied classes. These results raise serious concerns regarding the validity of mutation-based experiments that use PIT. To further show the strengths of the extended mutants we also performed an analysis using a benchmark with mutation-adequate test cases and identified equivalent mutants. Our results confirmed that the extended mutants are more effective than (a) the original version of PIT and (b) two other popular mutation testing tools (Major and muJava). In particular, our results demonstrate that the extended mutants are more effective by 23%, 12% and 7% than the mutants of the original PIT, Major and muJava, respectively. They also show that the extended mutants are at least as strong as the mutants of all three other tools combined. To support future research, we make the new version of PIT, which is equipped with the extended mutants, publicly available.
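The metric behind such comparisons is the mutation score: the fraction of non-equivalent mutants a test suite kills. A minimal sketch of the standard definition (not code from the paper; the data-structure names are illustrative):

```python
# Mutation score sketch (standard definition, not the paper's code):
# the fraction of non-equivalent mutants killed by at least one test.
def mutation_score(kill_matrix, equivalent):
    """kill_matrix maps each mutant id to the set of tests that kill it;
    equivalent is the set of mutant ids known to be equivalent."""
    scored = [m for m in kill_matrix if m not in equivalent]
    killed = [m for m in scored if kill_matrix[m]]
    return len(killed) / len(scored)
```

Excluding identified equivalent mutants from the denominator, as the study above does, avoids penalising suites for mutants that no test could ever kill.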
International Conference on High Performance Computing and Simulation | 2015
Xi Li; Anthony Ventresque; Jesus Omana Iglesias; John Murphy
Server consolidation is the most common and effective method to save energy and increase resource utilization in data centers, and virtual machine (VM) placement is the usual way of achieving server consolidation. VM placement is however challenging given the scale of IT infrastructures nowadays and the risk of resource contention among co-located VMs after consolidation. Therefore, the correlation among VMs to be co-located needs to be considered. However, existing solutions do not address the scalability issue that arises once the number of VMs increases to an order of magnitude that makes it unrealistic to calculate the correlation between each pair of VMs. In this paper, we propose a correlation-aware VM consolidation solution, ScalCCon, which uses a novel two-phase clustering scheme to address the aforementioned scalability problem. We propose and demonstrate the benefits of using the two-phase clustering scheme in comparison to solutions using one-phase clustering (up to 84% reduction of execution time when 17,446 VMs are considered). Moreover, our solution manages to reduce the number of physical machines (PMs) required, as well as the number of performance violations, compared to existing correlation-based approaches.
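A two-phase scheme of this kind can be sketched as follows; this is a hypothetical simplification (bucket boundaries, thresholds and names are assumptions, not ScalCCon's actual design). Phase 1 groups VMs by a cheap feature, so that Phase 2's quadratic pairwise-correlation step only runs inside small groups instead of over all VM pairs.

```python
# Hypothetical sketch of two-phase clustering for correlation-aware
# consolidation (illustrative; not the actual ScalCCon algorithm).
import numpy as np

def two_phase_clusters(workloads, buckets=4, corr_threshold=0.8):
    # Phase 1: cheap grouping by peak utilisation quantile, so the
    # O(n^2) correlation step below only runs within each group.
    peaks = workloads.max(axis=1)
    edges = np.quantile(peaks, np.linspace(0, 1, buckets + 1))
    groups = {}
    for i, p in enumerate(peaks):
        b = min(np.searchsorted(edges, p, side='right') - 1, buckets - 1)
        groups.setdefault(b, []).append(i)
    # Phase 2: pairwise correlation, but only inside each small group.
    clusters = []
    for members in groups.values():
        remaining = list(members)
        while remaining:
            seed = remaining.pop()
            cluster = [seed]
            for j in remaining[:]:
                r = np.corrcoef(workloads[seed], workloads[j])[0, 1]
                if r >= corr_threshold:
                    cluster.append(j)
                    remaining.remove(j)
            clusters.append(cluster)
    return clusters
```

VMs whose demand peaks are highly correlated end up in the same cluster, which a placement step can then spread across hosts to avoid contention.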
Hybrid Metaheuristics: 9th International Workshop, HM 2014 | 2014
Takfarinas Saber; Anthony Ventresque; Xavier Gandibleux; Liam Murphy
Data centres are facilities with large numbers of machines (i.e., servers) and hosted processes (e.g., virtual machines). Managers of data centres (e.g., operators, capital allocators, CRM) constantly try to optimise them, reassigning 'better' machines to processes. These managers usually see better/good placements as a combination of distinct objectives, which is why in this paper we define the data centre optimisation problem as a multi-objective machine reassignment problem. While classical solutions either do not find many solutions (e.g., GRASP), do not cover the search space well (e.g., PLS), or cannot operate properly (e.g., NSGA-II lacks a good initial population), we propose GeNePi, a novel hybrid algorithm. We show that GeNePi outperforms all the other algorithms in terms of quantity of solutions (nearly 6 times more solutions on average than the second-best algorithm) and quality (the hypervolume of the Pareto frontier is 106% better on average).
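In a multi-objective setting like this, "quantity of solutions" refers to the non-dominated (Pareto-optimal) reassignments an algorithm finds. A minimal, generic sketch of the dominance test and Pareto-front filter (standard definitions, assuming all objectives are minimised; not GeNePi itself):

```python
# Pareto dominance and front extraction (generic sketch, objectives
# minimised; not the GeNePi algorithm itself).
def dominates(a, b):
    """a dominates b: no worse on every objective, strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

The hypervolume quality measure mentioned above is then the volume of objective space dominated by this front, relative to a reference point.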
Future Generation Computer Systems | 2018
Takfarinas Saber; James Thorburn; Liam Murphy; Anthony Ventresque
Optimising the data centres of large IT organisations is complex as (i) they are composed of various hosting departments with their own preferences and (ii) reassignment solutions can be evaluated from various independent dimensions. But in reality, the problem is even more challenging as companies can now choose from a pool of cloud services to host some of their workloads. This hybrid search space seems intractable, as each workload placement decision (seen as running in a virtual machine on a server) is required to answer many questions: can we host it internally? In which hosting department? Do the capital allocators of this hosting department approve of this placement? How much does it save us and is it safe? Is there a better option in the Cloud? Etc. In this paper, we define the multi-objective VM reassignment problem for hybrid and decentralised data centres. We also propose H2–D2, a solution that uses a multi-layer architecture and a metaheuristic algorithm to suggest reassignment solutions that are evaluated by the various hosting departments (according to their preferences). We compare H2–D2 against state-of-the-art multi-objective algorithms and find that H2–D2 outperforms them both in terms of quantity of solutions (approximately 30% more than the second-best algorithm on average) and quality of solutions (19% better than the second-best on average).
International Journal of Parallel Programming | 2016
Xi Li; Anthony Ventresque; John Murphy; James Thorburn
Server sprawl is a problem faced by data centers, which causes unnecessary waste of hardware resources, collateral costs of space, power and cooling systems, and administration. This is usually combated by virtualization-based consolidation, and both industry and academia have put much effort into solving the underlying virtual machine (VM) placement problem. However, IT managers' preferences are seldom considered when making VM placement decisions. This paper proposes a satisfaction-oriented VM consolidation mechanism (SOC) to plan VM consolidation while taking IT managers' preferences into consideration. In the mechanism, we propose: (1) an XML-based description language to express managers' preferences and metrics to evaluate the satisfaction degree; (2) to apply matchmaking to locate entities [i.e., VMs and physical machines (PMs)] that best match each other's preferences; (3) to employ the VM placement algorithm proposed in our previous work to minimize the number of hosts required and the resource wastage on allocated hosts. SOC is compared with two baselines: placement-only and matchmaking-only. The simulation results show that most of the VM-to-PM mappings output by placement-only violate given preferences, while SOC has a satisfaction degree close to matchmaking-only, without requiring as many PMs as matchmaking-only does, but only an amount close to placement-only. In brief, SOC is effective in minimizing the number of hosts required to support a certain set of VMs, while maximizing the satisfaction degree of managers on both the provider and requester sides.
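The matchmaking step can be sketched as a preference-satisfaction function. This is a toy sketch with hypothetical attribute names (the paper uses an XML-based preference language): each VM lists hard requirements that must hold and soft preferences, and the satisfaction degree is the fraction of soft preferences a candidate PM meets.

```python
# Toy matchmaking sketch (hypothetical attribute names; not the paper's
# XML preference language). Hard requirements gate the match; the
# satisfaction degree is the fraction of soft preferences met.
def satisfaction(vm, pm):
    hard_ok = all(pm['caps'].get(k) == v for k, v in vm['require'].items())
    if not hard_ok:
        return 0.0
    prefs = vm['prefer']
    if not prefs:
        return 1.0
    met = sum(pm['caps'].get(k) == v for k, v in prefs.items())
    return met / len(prefs)
```

A matchmaker would then rank candidate PMs per VM by this score, while the placement algorithm keeps the total PM count low.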
IEEE/ACM International Conference on Utility and Cloud Computing | 2015
Takfarinas Saber; Anthony Ventresque; Ivona Brandic; James Thorburn; Liam Murphy
Optimising the IT infrastructure of large, often geographically distributed, organisations goes beyond the classical virtual machine reassignment problem, for two reasons: (i) the data centres of these organisations are composed of a number of hosting departments which have different preferences on what to host and where to host it, and (ii) the top-level managers in these data centres make complex decisions and need to manipulate possible solutions favouring different objectives to find the right balance. This challenge has not yet been comprehensively addressed in the literature, and in this paper we demonstrate that multi-objective VM reassignment is feasible for large decentralised data centres. We show on a realistic data set that our solution outperforms other classical multi-objective algorithms for VM reassignment in terms of quantity of solutions (by about 15% on average) and quality of the solution set (by over 6% on average).
International Symposium on Parallel and Distributed Computing | 2014
Xi Li; Anthony Ventresque; John Murphy; James Thorburn
Data center optimization, mainly through virtual machine (VM) placement, has received considerable attention in the past years. Many heuristics have been proposed to give quick and reasonably good solutions to this problem. However, it is difficult to compare them as they use different datasets, and the distribution of resources in the datasets has a big impact on the results. In this paper we propose the first benchmark for VM placement heuristics and we define a novel heuristic. Our benchmark is inspired by a real data center and explores different possible demographics of data centers, which makes it suitable for comparing the behaviour of heuristics. Our new algorithm, RBP, outperforms the state-of-the-art heuristics and provides close-to-optimal results quickly.
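The kind of heuristic such a benchmark compares can be illustrated with first-fit decreasing, a classical bin-packing baseline (this sketch is the textbook baseline, not the paper's RBP algorithm, and uses a single scalar resource for simplicity):

```python
# First-fit-decreasing VM placement (classical baseline heuristic,
# not the paper's RBP algorithm; single scalar resource for simplicity).
def first_fit_decreasing(demands, capacity):
    hosts = []          # remaining capacity of each open host
    placement = {}      # vm index -> host index
    for vm in sorted(range(len(demands)), key=lambda v: -demands[v]):
        for h, free in enumerate(hosts):
            if demands[vm] <= free:      # first host that still fits
                hosts[h] -= demands[vm]
                placement[vm] = h
                break
        else:                            # no host fits: open a new one
            hosts.append(capacity - demands[vm])
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)
```

Because such heuristics are sensitive to how demands are distributed, a benchmark that varies the "demographics" of VM sizes, as the paper proposes, is what makes their comparison meaningful.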
Winter Simulation Conference | 2014
Quentin Bragard; Anthony Ventresque; Liam Murphy
Distributed simulations require partitioning mechanisms to operate, and the best partitioning algorithms try to load-balance the partitions. Dynamic load-balancing, i.e., re-partitioning simulation environments at run-time, becomes essential when the load in the partitions changes. In decentralised distributed simulation the information needed to dynamically load-balance is difficult to collect and, to our knowledge, all existing solutions apply local dynamic load-balancing: partitions exchange load only with their neighbours (from more loaded partitions to less loaded ones). This limits the effect of the load-balancing. In this paper, we present a global dynamic load-balancing scheme for decentralised distributed simulations. Our algorithm collects information in a decentralised fashion and makes re-balancing decisions based on the load processed by every logical process. While our algorithm yields similar results to others in most cases, we show an improvement in load-balancing of up to 30% in some challenging scenarios, against only 12.5% for local dynamic load-balancing.
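The difference between local and global re-balancing can be sketched simply (an illustrative toy, not the paper's algorithm): global balancing pairs the most- and least-loaded partitions anywhere in the simulation, which can correct imbalances that neighbour-only exchanges cannot reach.

```python
# Toy sketch of global re-balancing (illustrative; not the paper's
# algorithm). Unlike local balancing, which only shifts load between
# neighbouring partitions, this pairs the globally most- and
# least-loaded partitions until their gap is within one step.
def global_rebalance(loads, step=1, rounds=10):
    loads = list(loads)
    for _ in range(rounds):
        hi = max(range(len(loads)), key=loads.__getitem__)
        lo = min(range(len(loads)), key=loads.__getitem__)
        if loads[hi] - loads[lo] <= step:
            break                         # already balanced enough
        loads[hi] -= step                 # migrate 'step' load hi -> lo
        loads[lo] += step
    return loads
```

With a load spike concentrated on one partition, neighbour-only exchanges would take many rounds to diffuse it across the simulation, while the global scheme moves load directly to the least-loaded partition.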