Carlos García Garino
National University of Cuyo
Publications
Featured research published by Carlos García Garino.
Computers & Electrical Engineering | 2012
Carlos Catania; Carlos García Garino
Automatic network intrusion detection has been an important research topic for the last 20 years. In that time, approaches based on signatures describing intrusive behavior have become the de facto industry standard. Alternatively, other novel techniques have been applied to improve the automation of the intrusion detection process. In this regard, statistical methods, machine learning, and data mining techniques have been proposed, arguing higher automation capabilities than signature-based approaches. However, the majority of these novel techniques have never been deployed in real-life scenarios; signature-based detection remains the most widely used strategy for automatic intrusion detection. In the present article we survey the most relevant works in the field of automatic network intrusion detection. In contrast to previous surveys, our analysis considers several features required for actually deploying each of the reviewed approaches. This wider perspective can help identify the possible causes behind the lack of acceptance of novel techniques by network security experts.
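As a rough illustration of the signature-based strategy the survey describes, the sketch below matches packet payloads against a list of regular-expression signatures. The rules shown are invented for illustration only; they are not real SNORT or IDS rules.

```python
import re

# Hypothetical signatures: each rule pairs a name with a payload pattern.
# These example patterns are illustrative only, not real IDS rules.
SIGNATURES = [
    ("shellcode-nop-sled", re.compile(rb"\x90{16,}")),
    ("sql-injection", re.compile(rb"(?i)union\s+select")),
    ("path-traversal", re.compile(rb"\.\./\.\./")),
]

def match_signatures(payload: bytes):
    """Return the names of all signatures that fire on a payload."""
    return [name for name, pattern in SIGNATURES if pattern.search(payload)]

if __name__ == "__main__":
    print(match_signatures(b"GET /a/../../etc/passwd HTTP/1.1"))  # ['path-traversal']
```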
Expert Systems With Applications | 2012
Carlos Catania; Facundo Bromberg; Carlos García Garino
In recent years, several support vector machine (SVM) novelty detection approaches have been applied in the network intrusion detection field. The main advantage of these approaches is that they can characterize normal traffic even when trained on datasets containing not only normal traffic but also a number of attacks. Unfortunately, these algorithms seem to be accurate only when normal traffic vastly outnumbers the attacks present in the dataset, a condition that does not always hold. This work presents an approach for the autonomous labeling of normal traffic as a way of dealing with situations where the class distribution does not present the imbalance required by SVM algorithms. In this case, the autonomous labeling is performed by SNORT, a misuse-based intrusion detection system. Experiments conducted on the 1998 DARPA dataset show that the proposed autonomous labeling approach not only outperforms existing SVM alternatives but also, under some attack distributions, improves on SNORT itself.
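A minimal sketch of the autonomous-labeling idea follows, using scikit-learn's OneClassSVM as the novelty detector and a stand-in predicate in place of SNORT. The toy features and the threshold playing the role of the signature engine are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Toy connection features (e.g., duration, bytes sent, bytes received).
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 3))
attacks = rng.normal(loc=4.0, scale=1.0, size=(100, 3))
traffic = np.vstack([normal, attacks])

def signature_flags(X):
    """Stand-in for SNORT: flags connections a misuse-based IDS would alert on.
    Here a crude threshold plays that role, purely for illustration."""
    return X.sum(axis=1) > 6.0

# Autonomous labeling: keep only the traffic the signature engine did NOT flag,
# and train the novelty detector on that (mostly normal) subset.
clean = traffic[~signature_flags(traffic)]
detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(clean)

# predict() returns +1 for inliers (normal) and -1 for novelties (attacks).
predictions = detector.predict(traffic)
print("flagged as attack:", int((predictions == -1).sum()))
```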
Computers & Electrical Engineering | 2014
Elina Pacini; Cristian Mateos; Carlos García Garino
Scientists and engineers need computational power to satisfy the increasingly resource-intensive nature of their simulations. For example, running Parameter Sweep Experiments (PSEs) involves processing many independent jobs, given by multiple initial configurations (input parameter values) run against the same program code. Hence, paradigms like Grid Computing and Cloud Computing are employed to gain scalability. However, job scheduling in Grid and Cloud environments is a difficult problem since it is basically NP-complete. Thus, many variants based on approximation techniques, especially those from Swarm Intelligence (SI), have been proposed, since these techniques can search for problem solutions very efficiently. This paper surveys SI-based job scheduling algorithms for bag-of-tasks applications (such as PSEs) on distributed computing environments and uniformly compares them using a derived comparison framework. We also discuss open problems and future research directions in the area.
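To make the PSE notion concrete, here is a minimal sketch: the same simulation code is run over a grid of input parameter values as fully independent jobs, parallelized here with a local process pool. The simulate function and parameter names are placeholders; on a Grid or Cloud each job would be submitted to the infrastructure instead.

```python
from itertools import product
from multiprocessing import Pool

def simulate(params):
    """Placeholder for the real simulation code; each call is an independent job."""
    load, mesh_size = params
    return load, mesh_size, load / mesh_size  # stand-in result

if __name__ == "__main__":
    # The parameter sweep: every combination of values becomes one job.
    loads = [10.0, 20.0, 30.0]
    mesh_sizes = [100, 200]
    jobs = list(product(loads, mesh_sizes))

    with Pool() as pool:  # on a Grid/Cloud this would be a job submission step
        results = pool.map(simulate, jobs)
    print(results)
```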
Advances in Engineering Software | 2013
Cristian Mateos; Elina Pacini; Carlos García Garino
Parameter Sweep Experiments (PSEs) allow scientists and engineers to conduct experiments by running the same program code against different input data. This usually results in many jobs with high computational requirements, so distributed environments, particularly Clouds, can be employed to fulfill these demands. However, job scheduling is challenging since it is an NP-complete problem. Recently, Cloud schedulers based on bio-inspired techniques, which work well at approximating problems with little input information, have been proposed. Unfortunately, existing proposals ignore job priorities, a very important aspect in PSEs since priorities make it possible to accelerate the processing and visualization of PSE results in scientific Clouds. We present a new Cloud scheduler based on Ant Colony Optimization (ACO), the most popular bio-inspired technique, which also exploits well-known notions from operating systems theory. Simulated experiments performed with real PSE job data and other Cloud scheduling policies indicate that our proposal allows for more agile job handling while reducing PSE completion time.
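A minimal sketch of how an ACO-style scheduler might pick hosts: pheromone trails bias the choice toward lightly loaded hosts, and evaporation keeps the information fresh. The update rules and parameter values are illustrative assumptions, not the paper's exact algorithm, and the paper's priority handling (an aging-style notion borrowed from operating systems) is not sketched here.

```python
import random

class ACOScheduler:
    """Toy ACO-style host selection: pheromone ~ host desirability."""

    def __init__(self, n_hosts, evaporation=0.1):
        self.pheromone = [1.0] * n_hosts
        self.load = [0] * n_hosts
        self.evaporation = evaporation

    def pick_host(self):
        # Desirability combines pheromone with a heuristic (inverse load).
        weights = [p * (1.0 / (1 + l)) for p, l in zip(self.pheromone, self.load)]
        host = random.choices(range(len(weights)), weights=weights)[0]
        self.load[host] += 1
        # Evaporate everywhere, then deposit on the chosen trail.
        self.pheromone = [(1 - self.evaporation) * p for p in self.pheromone]
        self.pheromone[host] += 1.0 / (1 + self.load[host])
        return host

scheduler = ACOScheduler(n_hosts=4)
print([scheduler.pick_host() for _ in range(10)])
```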
Advances in Engineering Software | 2015
Elina Pacini; Cristian Mateos; Carlos García Garino
The Cloud Computing paradigm focuses on the provisioning of reliable and scalable infrastructures (Clouds) that deliver execution and storage services. The paradigm, with its promise of virtually infinite resources, seems well suited to solving resource-greedy scientific computing problems. The goal of this work is to study private Clouds that execute scientific experiments coming from multiple users, i.e., our work focuses on the Infrastructure as a Service (IaaS) model, where custom Virtual Machines (VMs) are launched in appropriate hosts available in a Cloud. Correctly scheduling Cloud hosts is therefore very important, and efficient scheduling strategies must be developed to appropriately allocate VMs to physical resources. The job scheduling problem is, however, NP-complete, and therefore many heuristics have been developed. In this work, we describe and evaluate a Cloud scheduler based on Ant Colony Optimization (ACO). The main performance metrics studied are the number of users serviced by the Cloud and the total number of created VMs in online (non-batch) scheduling scenarios. In addition, the number of intra-Cloud network messages sent is evaluated. Simulated experiments performed using CloudSim and job data from real scientific problems show that our scheduler succeeds in balancing the studied metrics compared to schedulers based on Random assignment and Genetic Algorithms.
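The online (non-batch) setting described above can be pictured as an event loop: VM requests arrive one at a time and are either placed on a host with free capacity or rejected, which drives the two metrics (serviced users and created VMs). A hedged sketch follows, with a simple greedy placement standing in for the ACO choice.

```python
def place_vms(requests, host_capacity):
    """Online placement: each request is handled as it arrives.
    requests: list of (user_id, vm_cores); host_capacity: free cores per host."""
    free = list(host_capacity)
    serviced_users, created_vms = set(), 0
    for user, cores in requests:
        # Greedy stand-in for the ACO choice: first host with enough free cores.
        host = next((h for h, c in enumerate(free) if c >= cores), None)
        if host is not None:
            free[host] -= cores
            created_vms += 1
            serviced_users.add(user)
    return len(serviced_users), created_vms

print(place_vms([(1, 2), (2, 4), (1, 2), (3, 8)], host_capacity=[4, 4]))  # (2, 3)
```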
Advances in New Technologies, Interactive Interfaces, and Communicability | 2011
Elina Pacini; Melisa Ribero; Cristian Mateos; Anibal Mirasso; Carlos García Garino
Scientists and engineers increasingly need computational power to satisfy the ever more resource-intensive nature of their experiments. Traditionally, they have relied on conventional computing infrastructures such as clusters and Grids. A recent computing paradigm that is gaining momentum is Cloud Computing, which offers a simpler administration mechanism compared to those conventional infrastructures. However, the literature lacks studies on the viability of using Cloud Computing to execute scientific and engineering applications from a performance standpoint. We present an empirical study on the use of Cloud infrastructures to run parameter sweep experiments (PSEs), particularly studies of viscoplastic solids, with simulations performed using the CloudSim toolkit. In general, we obtained very good speedups, which suggests that disciplinary users could benefit from Cloud Computing for executing resource-intensive PSEs.
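For reference, the speedups reported above follow the standard definitions (speedup as sequential over parallel time, efficiency as speedup per worker); the timings in this tiny sketch are made up, not taken from the paper.

```python
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, workers):
    return speedup(t_serial, t_parallel) / workers

# Illustrative timings only: a 400 s sequential PSE run finishing in 50 s on 10 VMs.
print(speedup(400, 50), efficiency(400, 50, 10))  # 8.0 0.8
```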
IEEE Biennial Congress of Argentina | 2016
Pablo Torres; Carlos Catania; Sebastian Garcia; Carlos García Garino
A Botnet can be conceived as a group of compromised computers which can be controlled remotely to execute coordinated attacks or commit fraudulent acts. The fact that Botnets keep continuously evolving means that traditional detection approaches are always one step behind. Recently, behavioral analysis of network traffic has arisen as a way to tackle the Botnet detection problem. The behavioral analysis approach looks at the common patterns Botnets follow across their life cycle and tries to generalize from them, in order to become capable of detecting unseen Botnet traffic. This work analyzes the viability of Recurrent Neural Networks (RNNs) for detecting the behavior of network traffic by modeling it as a sequence of states that change over time. The recent success of RNNs on sequential data problems makes them viable candidates for sequence behavior analysis. The performance of an RNN is evaluated considering two main issues, the imbalance of network traffic and the optimal length of sequences, both of which have a great impact on potential real-life deployments. Evaluation is performed using stratified k-fold cross validation, and an independent test is conducted on previously unseen traffic belonging to a different Botnet. Preliminary results reveal that the RNN is capable of classifying the traffic with a high attack detection rate and a very small false alarm rate, which makes it a potential candidate for implementation and deployment in real-world scenarios. However, the experiments also exposed that RNN detection models have problems dealing with traffic behaviors that are not easily differentiable, as well as with some particular cases of imbalanced network traffic.
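A minimal sketch of the sequence-classification setup: traffic behavior is encoded as sequences of discrete states, and an LSTM (one kind of RNN) produces one Botnet-vs-normal score per sequence. The state alphabet size, dimensions, and dummy inputs are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TrafficRNN(nn.Module):
    """Classifies a sequence of discrete traffic-behavior states."""

    def __init__(self, n_states=50, embed_dim=16, hidden_dim=32):
        super().__init__()
        self.embed = nn.Embedding(n_states, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x):                    # x: (batch, seq_len) of state ids
        _, (h, _) = self.lstm(self.embed(x))
        return self.head(h[-1]).squeeze(-1)  # one logit per sequence

model = TrafficRNN()
batch = torch.randint(0, 50, (8, 20))  # 8 dummy sequences of length 20
probs = torch.sigmoid(model(batch))    # P(Botnet) per sequence
print(probs.shape)                     # torch.Size([8])
```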
Computing | 2016
Elina Pacini; Cristian Mateos; Carlos García Garino
Cloud Computing is a promising paradigm for parallel computing. However, as Cloud-based services become more dynamic, resource provisioning in Clouds becomes more challenging. The paradigm, with its promise of virtually infinite resources, seems well suited to solving resource-greedy scientific computing problems. In a Cloud, an appropriate number of Virtual Machines (VMs) is created and allocated to physical resources for executing jobs. This work focuses on the Infrastructure as a Service (IaaS) model, where custom VMs are launched in appropriate hosts available in a Cloud to execute scientific experiments coming from multiple users. Finding optimal solutions for allocating VMs to physical resources is an NP-complete problem, and therefore many heuristics have been developed. In this work, we describe and evaluate two Cloud schedulers based on Swarm Intelligence (SI) techniques, particularly Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO). The main performance metrics studied are the number of users serviced by the Cloud and the total number of created VMs in online (non-batch) scheduling scenarios. We also perform a sensitivity analysis, varying the algorithm-specific parameter values of each algorithm to evaluate the impact on the two objective metrics. The intra-Cloud network traffic is also measured. Simulated experiments performed using CloudSim and job data from real scientific problems show that the SI-based techniques succeed in balancing the studied metrics compared to Genetic Algorithms.
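The sensitivity analysis mentioned above can be organized as a simple grid over algorithm-specific parameters, re-running the simulation for each setting and recording the two objective metrics. A hedged sketch of such a driver follows; the parameter names and run_simulation are placeholders, not CloudSim's API.

```python
from itertools import product

def run_simulation(params):
    """Placeholder for a CloudSim-style run; returns (serviced_users, created_vms)."""
    return 100 - 5 * params["evaporation_rate"], 80 + params["n_ants"]  # dummy values

grid = {
    "n_ants": [10, 20, 40],          # ACO-specific parameter (assumed name)
    "evaporation_rate": [0.1, 0.5],  # ACO-specific parameter (assumed name)
}

results = []
for values in product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    results.append((params, run_simulation(params)))

for params, metrics in results:
    print(params, "-> serviced users, created VMs:", metrics)
```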
Computers & Electrical Engineering | 2015
Emmanuel N. Millán; Carlos S. Bederian; María Fabiana Piccoli; Carlos García Garino; Eduardo M. Bringa
Highlights: We present a free open source code for Cellular Automata (CA) using MPI. Weak and strong scaling tests are carried out on three different architectures. The performance of our code compares well with that of other mature HPC codes. Hardware counters are used to help identify performance issues.
Cellular Automata (CA) are of interest in several research areas and there are many available serial implementations of CA. However, there are relatively few studies analyzing in detail High Performance Computing (HPC) implementations of CA that allow research on large systems. Here, we present a parallel implementation of a CA with distributed memory based on MPI. As a first step to ensure fast performance, we study several possible serial implementations of the CA. The simulations are performed on three infrastructures, comparing two different microarchitectures. The parallel code is tested with both strong and weak scaling, and we obtain parallel efficiencies of ~75-85% for 64 cores, comparable to the efficiencies of other mature parallel codes on similar architectures. We report communication times and multiple hardware counters, which reveal that performance losses are related to cache misses, branches, and memory accesses.
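A minimal sketch of the distributed-memory pattern described above, using mpi4py: the CA grid is split into row blocks, and each step exchanges one-row halos with neighboring ranks before applying the update rule. The simple majority rule below is a stand-in for the paper's actual CA transition function.

```python
# Run with e.g.: mpiexec -n 4 python ca_mpi.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rows, cols, steps = 64, 64, 10  # per-rank block size (illustrative)
up = rank - 1 if rank > 0 else MPI.PROC_NULL
down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

rng = np.random.default_rng(rank)
grid = rng.integers(0, 2, size=(rows + 2, cols), dtype=np.int8)  # +2 ghost rows
grid[0] = 0   # ghost rows start empty at the domain boundaries
grid[-1] = 0

for _ in range(steps):
    # Halo exchange: send my edge rows, receive the neighbors' edge rows.
    comm.Sendrecv(grid[1], dest=up, recvbuf=grid[-1], source=down)
    comm.Sendrecv(grid[-2], dest=down, recvbuf=grid[0], source=up)
    # Toy update rule: a cell takes the majority of itself and its vertical
    # neighbors (stand-in for the real CA transition function).
    interior = grid[1:-1]
    neighbors = grid[:-2] + interior + grid[2:]
    grid[1:-1] = (neighbors >= 2).astype(np.int8)

print(f"rank {rank}: live cells = {int(grid[1:-1].sum())}")
```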
IEEE International Conference on High Performance Computing, Data, and Analytics | 2014
David A. Monge; Carlos García Garino
This paper deals with the problem of autoscaling for scientific workflows in cloud computing. Autoscaling is a process in which infrastructure scaling (i.e., determining the number and type of instances to acquire for executing an application) interleaves with the scheduling of tasks in order to reduce the time and monetary cost of executions. This work proposes a novel strategy called Spot Instances Aware Autoscaling (SIAA), designed for the optimized execution of scientific workflow applications. SIAA takes advantage of the better prices of Amazon EC2-like spot instances to achieve better performance and cost savings. To deal with execution efficiency, SIAA uses a novel heuristic scheduling algorithm to optimize workflow makespan and reduce the effect of the task failures that may occur when using spot instances. Experiments were carried out using several types of real-world scientific workflows. Results demonstrate that SIAA greatly outperforms state-of-the-art autoscaling mechanisms in terms of makespan (by up to 88.0%) and execution cost (by up to 43.6%).
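The spot-versus-on-demand tradeoff at the heart of SIAA can be illustrated with a simple expected-cost comparison: spot instances are cheaper but may be terminated, forcing a re-run. This is a hand-written illustration of that reasoning, not the SIAA algorithm; all prices and probabilities are made up, and failed runs are simplistically billed for the full task duration.

```python
def expected_spot_cost(task_hours, spot_price, failure_prob):
    """Expected cost of a task on a spot instance, where a termination
    forces the task to be re-executed from scratch (simplified model)."""
    # With per-run failure probability p, the expected number of runs is 1/(1-p).
    expected_runs = 1.0 / (1.0 - failure_prob)
    return task_hours * spot_price * expected_runs

def choose_instance(task_hours, spot_price, ondemand_price, failure_prob):
    spot = expected_spot_cost(task_hours, spot_price, failure_prob)
    ondemand = task_hours * ondemand_price
    return ("spot", spot) if spot < ondemand else ("on-demand", ondemand)

# Made-up numbers: a 2-hour task, spot at $0.03/h vs on-demand at $0.10/h.
print(choose_instance(2.0, 0.03, 0.10, failure_prob=0.4))  # ('spot', ~0.10)
```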