

Publications


Featured research published by Horacio González-Vélez.


Applied Intelligence | 2009

HealthAgents: distributed multi-agent brain tumor diagnosis and prognosis

Horacio González-Vélez; Mariola Mier; Margarida Julià-Sapé; Theodoros N. Arvanitis; Juan Miguel García-Gómez; Montserrat Robles; Paul H. Lewis; Srinandan Dasmahapatra; David Dupplaw; Andrew Peet; Carles Arús; Bernardo Celda; Sabine Van Huffel; Magí Lluch-Ariet

We present an agent-based distributed decision support system for the diagnosis and prognosis of brain tumors developed by the HealthAgents project. HealthAgents is a European Union funded research project which aims to enhance the classification of brain tumors using such a decision support system, based on intelligent agents that securely connect a network of clinical centers. The HealthAgents system implements novel pattern recognition discrimination methods to analyze in vivo Magnetic Resonance Spectroscopy (MRS) and ex vivo/in vitro High Resolution Magic Angle Spinning Nuclear Magnetic Resonance (HR-MAS) and DNA micro-array data. HealthAgents intends not only to apply forefront agent technology to the biomedical field, but also to develop the HealthAgents network, a globally distributed information and knowledge repository for brain tumor diagnosis and prognosis.


Web Intelligence | 2006

On the Design of a Web-Based Decision Support System for Brain Tumour Diagnosis Using Distributed Agents

Carles Arús; Bernardo Celda; Srinandan Dasmahapatra; David Dupplaw; Horacio González-Vélez; Sabine Van Huffel; Paul H. Lewis; Magí Lluch i Ariet; Mariola Mier; Andrew C. Peet; Montserrat Robles

This paper introduces HealthAgents, an EC-funded research project to improve the classification of brain tumours through multi-agent decision support over a distributed network of local databases or data marts. HealthAgents will not only develop new pattern recognition methods for a distributed classification and analysis of in vivo MRS and ex vivo/in vitro HRMAS and DNA data, but also define a method to assess the quality and usability of a new candidate local database containing a set of new cases, based on a compatibility score.


Formal Methods | 2013

The ParaPhrase Project: Parallel patterns for adaptive heterogeneous multicore systems

Kevin Hammond; Marco Aldinucci; Christopher Brown; Francesco Cesarini; Marco Danelutto; Horacio González-Vélez; Peter Kilpatrick; Rainer Keller; Michael Rossbory; Gilad Shainer

This paper describes the ParaPhrase project, a new 3-year targeted research project funded under EU Framework 7 Objective 3.4 (Computer Systems), starting in October 2011. ParaPhrase aims to follow a new approach to introducing parallelism using advanced refactoring techniques coupled with high-level parallel design patterns. The refactoring approach will use these design patterns to restructure programs defined as networks of software components into other forms that are more suited to parallel execution. The programmer will be aided by high-level cost information that will be integrated into the refactoring tools. The implementation of these patterns will then use a well-understood algorithmic skeleton approach to achieve good parallelism.
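The algorithmic-skeleton approach the project builds on can be illustrated with a minimal task-farm sketch (the names and structure here are purely illustrative, not the ParaPhrase API; threads are used for brevity where a production skeleton would target processes or native threads):

```python
from concurrent.futures import ThreadPoolExecutor

def farm(worker, tasks, n_workers=4):
    """Task-farm skeleton: apply `worker` to every independent task in
    parallel, preserving input order in the results."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(worker, tasks))

def square(x):  # example worker: any pure function of one task
    return x * x

print(farm(square, range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The point of the skeleton is that the parallel structure (the farm) is fixed and well understood, so a refactoring tool can introduce it mechanically while the programmer supplies only the worker function.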


Computer Software and Applications Conference | 2007

An Adaptive Security Model for Multi-agent Systems and Application to a Clinical Trials Environment

Liang Xiao; Andrew C. Peet; Paul H. Lewis; Srinandan Dasmahapatra; Carlos Sáez; Madalina Croitoru; Javier Vicente; Horacio González-Vélez; M. Lluch i Ariet

We present in this paper an adaptive security model for multi-agent systems. A security meta-model has been developed in which the traditional role concept has been extended. The new concept incorporates the needs of both security management, as used in role-based access control (RBAC), and agent functional behaviour, as used in agent-oriented software engineering (AOSE). Our approach avoids the weaknesses of traditional RBAC approaches and provides a practically usable security model for multi-agent systems (MAS). A unified role interaction model framework has been put forward that incorporates not only functional requirements but also security constraints in MAS. A security policy rule scheme is used to express security requirements in relation to the affected roles. The major contribution of the work is that little redevelopment effort is required when security is to be engineered into the overall MAS architecture, hence minimising the impact of security-requirement changes on the MAS architecture. We illustrate the approach through its potential application in a clinical trial setting involving a prototype medical decision support system, HealthAgents.
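The flavour of a role-based policy check can be sketched minimally as follows (all role, action and resource names below are hypothetical; the actual HealthAgents policy rule scheme is considerably richer):

```python
# Each policy rule grants an action on a resource to a role; a request
# is allowed only if some role held by the agent matches a rule.
POLICY = [
    # (role, action, resource) -- illustrative names only
    ("clinician", "read", "mrs_spectrum"),
    ("clinician", "annotate", "mrs_spectrum"),
    ("data_manager", "read", "case_database"),
]

def allowed(agent_roles, action, resource):
    """Default-deny check: True only if a rule explicitly permits it."""
    return any((role, action, resource) in POLICY for role in agent_roles)

print(allowed({"clinician"}, "read", "mrs_spectrum"))   # True
print(allowed({"clinician"}, "read", "case_database"))  # False
```

Because the rules live in data rather than code, tightening or relaxing security amounts to editing the policy list, which mirrors the paper's goal of minimising redevelopment when security requirements change.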


Archive | 2011

Intelligent Decision Systems in Large-Scale Distributed Environments

Pascal Bouvry; Horacio González-Vélez; Joanna Kolodziej

One of the most challenging issues for intelligent decision systems is to effectively manage large-scale complex distributed environments, such as computational clouds, grids, ad hoc and P2P networks, under different types of users, their relations, and real-world uncertainties. In this context the IT resources and services usually belong to different owners (institutions, enterprises, or individuals) and are managed by different administrators. These administrators conform to different sets of rules and configuration directives, and can impose different usage policies on the system users. This book presents new ideas, analysis, implementations and evaluation of the next generation of intelligent techniques for solving complex decision problems in large-scale distributed systems. In 15 chapters, several important formulations of the decision problems in heterogeneous environments are identified, and a review of the recent approaches is presented, ranging from game-theoretical models and computational intelligence techniques, such as genetic, memetic and evolutionary algorithms, to intelligent multi-agent systems and networking. Additionally, uncertainties are present in various types of information that are incomplete, imprecise, fragmentary or overloading, which hinders the full and precise determination of the evaluation criteria, their subsequent selection, the assignment of scores, and eventually the final integrated decision result. We believe that this volume will serve as a reference for students, researchers and industry practitioners working in, or interested in joining, interdisciplinary work in the area of intelligent decision systems using emergent distributed computing paradigms. It will also allow newcomers to grasp key concerns and potential solutions on the selected topics.


Future Generation Computer Systems | 2014

Parallel patterns for heterogeneous CPU/GPU architectures: structured parallelism from cluster to cloud

Sonia Campa; Marco Danelutto; Mehdi Goli; Horacio González-Vélez; Alina Madalina Popescu; Massimo Torquati

The widespread adoption of traditional heterogeneous systems has substantially improved the computing power available and, in the meantime, raised optimisation issues related to the processing of task streams across both CPU and GPU cores in heterogeneous systems. Similar to the heterogeneous improvement gained in traditional systems, cloud computing has started to add heterogeneity support, typically through GPU instances, to the conventional CPU-based cloud resources. This optimisation of cloud resources will arguably have a real impact when running on-demand computationally-intensive applications. In this work, we investigate the scaling of pattern-based parallel applications from physical, “local” mixed CPU/GPU clusters to a public cloud CPU/GPU infrastructure. Specifically, such parallel patterns are deployed via algorithmic skeletons to exploit a particular parallel behaviour while hiding implementation details. We propose a systematic methodology to exploit approximated analytical performance/cost models, and an integrated programming framework suitable for targeting both local and remote resources, to support the offloading of computations from structured parallel applications to heterogeneous cloud resources, so that performance levels not attainable on local resources alone may actually be achieved with the remote resources. The amount of remote resources necessary to achieve a given performance target is calculated through the performance models, allowing any user to hire just the cloud resources needed to meet that target. It is therefore expected that such models can be used to devise the optimal proportion of computations to be allocated on different remote nodes for Big Data computations.
We present different experiments run with a proof-of-concept implementation based on FastFlow on small departmental clusters as well as on a public cloud infrastructure with CPU and GPU instances using the Amazon Elastic Compute Cloud. In particular, we show how CPU-only and mixed CPU/GPU computations can be offloaded to remote cloud resources with predictable performance, and how data-intensive applications can be mapped to a mix of local and remote resources to guarantee optimal performance.
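The resource-sizing idea can be sketched under a deliberately crude linear-speedup assumption (the paper's actual performance/cost models are more detailed; all names and numbers here are illustrative):

```python
import math

def nodes_for_target(t_task, n_tasks, t_target, overhead=0.0):
    """How many homogeneous cloud nodes are needed to finish `n_tasks`
    independent tasks of duration `t_task` seconds within `t_target`
    seconds, assuming ideal linear speedup plus a fixed offload overhead."""
    usable = t_target - overhead
    if usable <= 0:
        raise ValueError("target time does not cover the offload overhead")
    return math.ceil(n_tasks * t_task / usable)

# e.g. 10_000 tasks of 0.5 s each, a 60 s budget, 10 s offload overhead:
print(nodes_for_target(0.5, 10_000, 60.0, overhead=10.0))  # 100
```

A user could invert such a model to decide how many instances to hire for a given deadline, which is the role the paper assigns to its performance/cost models.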


Parallel Computing | 2006

Self-adaptive skeletal task farm for computational grids

Horacio González-Vélez

In this work, we introduce a self-adaptive task farm for computational grids, based on a single-round scheduling algorithm called dynamic deal. In principle, the dynamic deal approach employs skeletal forecasting information to automatically instrument the task-farm scheduling and determine the amount of work assigned to each worker at execution time, allowing the farm to adapt effectively to different load and network conditions in the grid. In practice, it uses self-generated predictive execution values and maps tasks onto the different nodes in a single round. The effectiveness of this approach is illustrated with a computational biology parameter sweep in a non-dedicated departmental grid.
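The single-round idea can be sketched as proportional allocation (a simplification: the real dynamic deal derives the speed estimates from skeletal forecasting information, whereas here they are simply given):

```python
def single_round_deal(n_tasks, predicted_speeds):
    """Split `n_tasks` among workers in one round, in proportion to
    their predicted speeds, so all workers finish at roughly the same
    time; any rounding remainder goes to the fastest worker."""
    total = sum(predicted_speeds)
    shares = [int(n_tasks * s / total) for s in predicted_speeds]
    shares[predicted_speeds.index(max(predicted_speeds))] += n_tasks - sum(shares)
    return shares

# three workers, the first twice as fast as the others:
print(single_round_deal(100, [2.0, 1.0, 1.0]))  # [50, 25, 25]
```

Because all work is handed out in one round, the schedule avoids repeated coordination with grid nodes, at the cost of relying on the quality of the predictions.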


International Journal of Applied Mathematics and Computer Science | 2011

Performance evaluation of MapReduce using full virtualisation on a departmental cloud

Horacio González-Vélez; Maryam Kontagora

This work analyses the performance of Hadoop, an implementation of the MapReduce programming model for distributed parallel computing, executing on a virtualisation environment comprised of 1+16 nodes running the VMware Workstation software. A set of experiments using the standard Hadoop benchmarks has been designed in order to determine whether or not significant reductions in the execution time of computations are experienced when using Hadoop on this virtualisation platform on a departmental cloud. Our findings indicate that a significant decrease in computing times is observed under these conditions. They also highlight how overheads and virtualisation in a distributed environment hinder the possibility of achieving the maximum (peak) performance.
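The MapReduce model being benchmarked can be illustrated with a tiny in-memory word-count sketch (Hadoop distributes these phases across nodes with shuffling and fault tolerance; this is purely illustrative):

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    """Map: emit a (key, value) pair per word occurrence."""
    return [(word, 1) for word in doc.split()]

def reduce_phase(pairs):
    """Reduce: sum the values for each key."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["to be or not to be", "be fast"]
pairs = chain.from_iterable(map_phase(d) for d in docs)
print(reduce_phase(pairs))  # {'to': 2, 'be': 3, 'or': 1, 'not': 1, 'fast': 1}
```

In Hadoop the map and reduce phases run on separate cluster nodes, which is precisely where the virtualisation overheads measured in the paper arise.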


Journal of Scheduling | 2010

Adaptive statistical scheduling of divisible workloads in heterogeneous systems

Horacio González-Vélez; Murray Cole

This article presents a statistical approach to the scheduling of divisible workloads. Structured as a task farm with different scheduling modes including adaptive single and multi-round scheduling, this novel divisible load theory approach comprises two phases, calibration and execution, which dynamically adapt the installment size and number. It introduces the concept of a generic installment factor based on the statistical dispersion of the calibration times of the participating nodes, which allows automatic determination of the number and size of the workload installments. Initially, the calibration ranks processors according to their fitness and determines an installment factor based on how different their execution times are. Subsequently, the execution iteratively distributes the workload according to the processor fitness, which is continuously re-assessed throughout the program execution. Programmed as an adaptive algorithmic skeleton, our task farm has been successfully evaluated for single-round scheduling and generic multi-round scheduling using a computational biology parameter-sweep in a non-dedicated multi-cluster system.
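The installment-factor idea can be sketched as follows, assuming the factor is taken as the coefficient of variation of the calibration times (a simplification of the article's statistical approach; the names are illustrative):

```python
import statistics

def installment_factor(calibration_times):
    """Relative dispersion (coefficient of variation) of the workers'
    calibration times: near 0 for similar workers (few, large
    installments suffice); larger for dissimilar workers (more, smaller
    installments let the schedule adapt)."""
    mean = statistics.mean(calibration_times)
    return statistics.pstdev(calibration_times) / mean

def rank_workers(calibration_times):
    """Rank worker indices by fitness, fastest calibration first."""
    return sorted(range(len(calibration_times)), key=lambda i: calibration_times[i])

times = [1.0, 1.1, 0.9, 3.0]          # one markedly slower node
print(rank_workers(times))             # [2, 0, 1, 3]
print(round(installment_factor(times), 3))  # 0.579
```

During execution the workload would then be re-distributed according to these fitness rankings, re-assessed as the measured times drift.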


Complex, Intelligent and Software Intensive Systems | 2010

Benchmarking a MapReduce Environment on a Full Virtualisation Platform

Maryam Kontagora; Horacio González-Vélez

This work analyses the performance of Hadoop, an implementation of the MapReduce programming model for distributed parallel computing, executing on a virtualisation environment comprised of 1+16 nodes running the VMware Workstation software. A set of experiments using the standard Hadoop benchmarks has been designed in order to determine whether or not significant reductions in the execution time of computations are experienced when using Hadoop on this virtualisation platform on a local area network. Our findings indicate that a significant decrease in computing times is observed under these conditions. They also highlight how overheads and virtualisation in a distributed environment hinder the possibility of achieving the maximum (peak) performance.

Collaboration


Top co-authors of Horacio González-Vélez:

David Dupplaw, University of Southampton
Paul H. Lewis, University of Southampton
Andrew C. Peet, University of Birmingham
Alex Gibb, University of Birmingham
Mehdi Goli, Robert Gordon University
Murray Cole, University of Edinburgh
Joanna Kolodziej, University of Bielsko-Biała