Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Patricio Domingues is active.

Publication


Featured research published by Patricio Domingues.


Future Generation Computer Systems | 2007

Sabotage-tolerance and trust management in desktop grid computing

Patricio Domingues; Bruno de Sousa; Luís Moura Silva

The success of grid computing in open environments like the Internet depends heavily on the adoption of mechanisms to detect failures and malicious sabotage attempts. A trust management system is also required to distinguish trustworthy from untrustworthy participants in a global computation. Without these mechanisms, users with data-critical applications will never rely on desktop grids, preferring instead to bear the higher cost of running their computations in closed and secure computing systems. This paper discusses sabotage tolerance and trust management. After reviewing the state of the art, we present two novel techniques: a mechanism for sabotage detection and a protocol for distributed trust management. The proposed techniques target the volunteer-based computing paradigm commonly used in desktop grids.
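
The paper's specific detection mechanism and trust protocol are not detailed in this abstract; the sketch below only illustrates the general volunteer-computing building blocks it starts from (result replication with majority voting plus a per-worker credibility score). The majority threshold and the reward/penalty values are assumptions made for illustration.

```python
from collections import Counter, defaultdict

# Minimal sketch: replicate each work unit, accept the majority result,
# and track a per-worker credibility score. The 0.01 reward and 0.05
# penalty are illustrative assumptions, not the paper's protocol.
credibility = defaultdict(lambda: 0.5)

def vote(results):
    """results: list of (worker_id, value) for one replicated work unit."""
    counts = Counter(value for _, value in results)
    winner, votes = counts.most_common(1)[0]
    if votes <= len(results) // 2:
        return None  # no majority: schedule more replicas
    for worker, value in results:
        if value == winner:
            credibility[worker] = min(1.0, credibility[worker] + 0.01)
        else:
            credibility[worker] = max(0.0, credibility[worker] - 0.05)
    return winner

print(vote([("w1", 42), ("w2", 42), ("w3", 7)]))  # 42; w3 is penalized
print(sorted(credibility.items()))
```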


European Conference on Parallel Processing | 2007

Characterizing result errors in internet desktop grids

Derrick Kondo; Filipe Araujo; Paul Malecot; Patricio Domingues; Luís Moura Silva; Gilles Fedak; Franck Cappello

Desktop grids use the free resources in Intranet and Internet environments for large-scale computation and storage. While desktop grids offer a high return on investment, one critical issue is the validation of results returned by participating hosts. Several mechanisms for result validation have been proposed previously, yet the characterization of errors remains poorly understood. To study error rates, we implemented and deployed a desktop grid application across several thousand hosts distributed over the Internet. We then analyzed the results to give a quantitative, empirical characterization of errors stemming from input or output (I/O) failures. We find that, in practice, error rates are widespread across hosts but occur relatively infrequently. Moreover, error rates tend to be neither stationary over time nor correlated between hosts. In light of these characterization results, we evaluate state-of-the-art error detection mechanisms and describe the trade-offs of using each one.
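
As a rough illustration of the kind of per-host error-rate analysis described above, the sketch below computes each host's error rate from validated results; the records are invented examples, not data from the deployment.

```python
from collections import defaultdict

# Per-host error-rate computation over (host, is_correct) records obtained
# after result validation. The records below are made-up placeholders.
records = [
    ("host-a", True), ("host-a", True), ("host-a", False),
    ("host-b", True), ("host-b", True),
    ("host-c", False), ("host-c", True), ("host-c", True), ("host-c", True),
]

totals = defaultdict(int)
errors = defaultdict(int)
for host, correct in records:
    totals[host] += 1
    if not correct:
        errors[host] += 1

rates = {host: errors[host] / totals[host] for host in totals}
hosts_with_errors = sum(1 for host in totals if errors[host] > 0)

print("per-host error rates:", rates)
print("fraction of hosts with at least one error:",
      hosts_with_errors / len(totals))
```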


International Conference on Parallel Processing | 2005

Resource usage of Windows computer laboratories

Patricio Domingues; Paulo Marques; Luís Moura Silva

Studies focusing on Unix have shown that the vast majority of workstations and desktop computers remain idle most of the time. In this paper we quantify the usage of the main resources (CPU, main memory, disk space and network bandwidth) of Windows 2000 machines in classroom laboratories. For that purpose, 169 machines across 11 classroom laboratories were monitored over 77 consecutive days, with samples collected from all machines every 15 minutes, for a total of 583,653 samples. Besides evaluating machine availability (uptime and downtime) and user habits, the paper assesses the usage of the main resources, focusing on the impact of interactive login sessions on resource consumption. Also, resorting to the Self-Monitoring, Analysis and Reporting Technology (SMART) parameters of hard disks, the study estimates the average uptime per hard drive power cycle over the whole life of the monitored computers. Our results show that resource idleness in classroom computers is very high, with an average CPU idleness of 97.9%, unused memory averaging 42.1% and unused disk space on the order of gigabytes per machine. Moreover, this study confirms the 2:1 equivalence rule found by similar works, with N non-dedicated resources delivering an average CPU computing power roughly equivalent to N/2 dedicated machines. These results confirm the potential of such systems for resource harvesting, especially for desktop grid computing schemes.
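
The reported figures boil down to simple aggregation over the collected samples; the sketch below shows the arithmetic with invented sample values, including the N/2 "equivalent dedicated machines" implied by the 2:1 rule.

```python
# Aggregate idleness across monitored machines and the 2:1 equivalence rule.
# The sample values are invented; the study used 583,653 real samples
# collected every 15 minutes from 169 machines.
samples = [
    {"machine": "lab1-pc01", "cpu_idle": 0.99, "free_mem": 0.45},
    {"machine": "lab1-pc02", "cpu_idle": 0.97, "free_mem": 0.40},
    {"machine": "lab2-pc01", "cpu_idle": 0.98, "free_mem": 0.41},
]

avg_cpu_idle = sum(s["cpu_idle"] for s in samples) / len(samples)
avg_free_mem = sum(s["free_mem"] for s in samples) / len(samples)

n_machines = 169
# 2:1 equivalence rule: N non-dedicated machines deliver roughly the CPU
# power of N/2 dedicated ones.
equivalent_dedicated = n_machines / 2

print(f"average CPU idleness: {avg_cpu_idle:.1%}")
print(f"average unused memory: {avg_free_mem:.1%}")
print(f"~{equivalent_dedicated:.0f} dedicated-machine equivalents "
      f"from {n_machines} lab machines")
```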


International Parallel and Distributed Processing Symposium | 2009

Evaluating the performance and intrusiveness of virtual machines for desktop grid computing

Patricio Domingues; Filipe Araujo; Luís Moura Silva

We experimentally evaluate the performance overhead of the virtual environments VMware Player, QEMU, VirtualPC and VirtualBox on a dual-core machine. First, we assess the performance of a Linux guest OS running in a virtual machine by separately benchmarking the CPU, file I/O and network bandwidth. These values are compared to the performance achieved when the applications run on a Linux OS directly on the physical machine. Second, we measure the impact that a virtual machine running a volunteer @home project worker causes on the host OS. Results show that the performance attainable in virtual machines depends on both the virtual machine software and the application type, with CPU-bound applications much less affected than I/O-bound ones. Additionally, the performance impact on the host OS caused by a virtual machine using all of its virtual CPU ranges from 10% to 35%, depending on the virtual environment.
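
The overhead comparison reduces to relating each benchmark score obtained inside a virtual machine to the score obtained natively; a minimal sketch, with invented placeholder scores rather than the paper's measurements:

```python
# Relative slowdown of a VM run versus a native run, per benchmark.
def overhead(native_score, vm_score):
    """Scores are 'higher is better' results (e.g. MFLOPS or MB/s)."""
    return (native_score - vm_score) / native_score

benchmarks = {
    # benchmark: (native score, score inside the VM) -- invented values
    "cpu":     (1000.0, 950.0),
    "file_io": (120.0, 80.0),
    "network": (940.0, 700.0),
}

for name, (native, vm) in benchmarks.items():
    print(f"{name:8s} overhead: {overhead(native, vm):.1%}")
```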


Journal of Grid Computing | 2009

Defeating Colluding Nodes in Desktop Grid Computing Platforms

Gheorghe Cosmin Silaghi; Filipe Araujo; Luís Moura Silva; Patricio Domingues; Alvaro Arenas

Desktop grid systems have reached a preeminent place among the most powerful computing platforms on the planet. Unfortunately, they are extremely vulnerable to mischief, because computing projects exert no administrative or technical control over volunteers. Volunteers can easily output bad results, due to software or hardware glitches (resulting from over-clocking, for instance), to get unfair computational credit, or simply to ruin the project. To mitigate this problem, desktop grid servers replicate work units and apply majority voting, typically on 2 or 3 results. In this paper, we observe that simple majority voting is powerless against malicious volunteers who collude to attack the project. We argue that to identify this type of attack and to spot colluding nodes, each work unit needs at least 3 voters. In addition, we propose to post-process the voting pools in two steps: (i) first, we use a statistical approach to identify nodes that were not colluding but submitted bad results; (ii) then, we use a rather simple principle to go after malicious nodes that acted together: they may have won conflicting voting pools against nodes not identified in step (i). We use simulation to show that our heuristic can be quite effective against colluding nodes in scenarios where honest nodes form a majority.
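
A minimal sketch of the voting-pool bookkeeping the argument relies on: with at least 3 voters per work unit, take the majority result and record which nodes won conflicting pools against which others. How those conflict records are then post-processed into a list of colluders is the paper's heuristic and is not reproduced here.

```python
from collections import Counter, defaultdict

# Record, per node, which other nodes it has out-voted in some pool.
conflicts = defaultdict(set)

def process_pool(votes):
    """votes: dict node_id -> result value, with at least 3 voters."""
    assert len(votes) >= 3, "need at least 3 voters to spot collusion"
    counts = Counter(votes.values())
    winner_value, n = counts.most_common(1)[0]
    if n <= len(votes) // 2:
        return None  # tie or no majority: needs more replicas
    winners = [node for node, v in votes.items() if v == winner_value]
    losers = [node for node, v in votes.items() if v != winner_value]
    for w in winners:
        conflicts[w].update(losers)
    return winner_value

process_pool({"n1": "A", "n2": "A", "n3": "B"})
process_pool({"n1": "C", "n2": "C", "n4": "D"})
print(dict(conflicts))  # n1 and n2 have each out-voted n3 and n4
```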


Network Operations and Management Symposium | 2006

Predicting Machine Availabilities in Desktop Pools

Artur Andrzejak; Patricio Domingues; Luís Moura Silva

This paper describes a study of predicting machine availability and user presence in a pool of desktop computers. The study is based on historical traces collected from 32 machines and shows that robust prediction accuracy can be achieved even in this highly volatile environment. The methods employed include a range of classification techniques known from data mining, such as Bayesian methods and support vector machines. A further contribution is the time-series framework used in the study, which automates correlation search and attribute selection and allows for easy reconfiguration and efficient prediction. The results illustrate the utility of prediction techniques in highly dynamic computing environments. Potential applications to the proactive management of desktop pools are discussed.
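
A minimal sketch of trace-based availability prediction, assuming a fixed sliding window of past samples as features and scikit-learn's GaussianNB as a stand-in for the Bayesian classifiers mentioned; the synthetic trace, window length and tooling are illustrative assumptions, not the paper's framework.

```python
# Predict the next availability sample from a window of recent samples.
from sklearn.naive_bayes import GaussianNB

# Synthetic availability trace (1 = machine available, 0 = unavailable).
trace = [1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1]

WINDOW = 4  # number of past samples used as features (assumed value)
X = [trace[i:i + WINDOW] for i in range(len(trace) - WINDOW)]
y = [trace[i + WINDOW] for i in range(len(trace) - WINDOW)]

model = GaussianNB().fit(X, y)
recent = [trace[-WINDOW:]]
print("predicted next availability:", model.predict(recent)[0])
```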


Parallel, Distributed and Network-Based Processing | 2006

DGSchedSim: a trace-driven simulator to evaluate scheduling algorithms for desktop grid environments

Patricio Domingues; Paulo Marques; Luís Moura Silva

This paper describes DGSchedSim, a trace-driven simulator for evaluating scheduling algorithms focused on minimising the turnaround time of applications executed on heterogeneous desktop grid systems. The simulator can model task-farming applications comprising a set of independent, equal-sized tasks, similar to numerous @Home public computing projects such as the popular SETI@home. DGSchedSim allows scheduling policies to be assessed under several scenarios, with control over parameters such as the application requirements (number of tasks and per-task needs such as required CPU time), the properties of the environment (machine computing capabilities and availability) and the characteristics of the execution (frequency and storage location of checkpoints, etc.). The simulations are driven by traces collected from real desktop grid systems. Besides DGSchedSim, the paper presents the Cluster Ideal Execution Time (CIET) algorithm, which computes the ideal wall-clock time required by a fully dedicated and totally reliable cluster of M heterogeneous machines to process the T tasks of an application. As a test of the simulator's capabilities, the paper analyses the suitability of two scheduling algorithms, FCFS and MinMax, for delivering fast turnaround times in desktop grids. Both algorithms, when combined with a centrally stored checkpoint policy, achieve an efficiency close to 50% of CIET for certain scenarios.
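
The abstract does not give CIET's exact formulation, but an ideal-execution-time estimate for equal-sized tasks on dedicated, always-available heterogeneous machines can be sketched as a greedy earliest-finish assignment; treat this as an assumption-laden illustration rather than the paper's algorithm.

```python
import heapq

# Greedy estimate of the ideal makespan for T equal-sized tasks on M
# dedicated heterogeneous machines: each task goes to the machine that
# would finish it first. Illustrative only; not the paper's CIET definition.
def ideal_makespan(task_cpu_seconds, n_tasks, machine_speeds):
    """machine_speeds: relative speeds, 1.0 = reference machine."""
    # heap of (time at which the machine becomes free, machine index)
    free_at = [(0.0, i) for i in range(len(machine_speeds))]
    heapq.heapify(free_at)
    for _ in range(n_tasks):
        t, i = heapq.heappop(free_at)
        t += task_cpu_seconds / machine_speeds[i]
        heapq.heappush(free_at, (t, i))
    return max(t for t, _ in free_at)

# 40 tasks of 600 CPU-seconds each on 4 machines of mixed speed:
print(ideal_makespan(600.0, 40, [1.0, 1.0, 0.5, 2.0]))
```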


Advanced Information Networking and Applications | 2006

Sharing checkpoints to improve turnaround time in desktop grid computing

Patricio Domingues; João Gabriel Silva; Luís Moura Silva

In this paper, we present a checkpoint-sharing methodology to improve the turnaround time of applications run in desktop grid environments. The volatility of desktop grid nodes reduces the efficiency of such environments when a fast turnaround time is sought, since a task can stall if its assigned machine remains unavailable for a long period (long at the scale of the computation). The rationale behind our approach is to permit checkpoint reuse, so that when a computation is forced to move from one node to another, it can be restarted from an intermediate point provided by the last saved checkpoint. We study the effect of sharing checkpoints on application turnaround time by simulating three scheduling algorithms based on first-come, first-served: FCFS, FCFS-AT and FCFS-TR. The targeted environment consists of institutional desktop grids. Our results show that sharing checkpoints is particularly effective in volatile environments, yielding performance improvements of up to three times relative to schemes based on private checkpoints. Furthermore, for non-volatile environments, a simple timeout strategy produces good results.
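
The benefit of checkpoint sharing comes down to how much work survives when a node becomes unavailable; a minimal sketch with illustrative numbers, not measurements from the paper:

```python
# Work remaining after a task is forced to migrate to another machine,
# with and without a shared (centrally reachable) checkpoint.
def remaining_work(task_length_s, checkpointed_s, shared_checkpoints):
    """CPU seconds still needed after the task's machine becomes unavailable."""
    if shared_checkpoints:
        return task_length_s - checkpointed_s  # resume from the last shared checkpoint
    return task_length_s                       # private checkpoint is lost with the machine

TASK = 3600.0              # the task needs one hour of CPU time
LAST_CHECKPOINT = 2700.0   # 45 minutes of work already checkpointed

print("shared checkpoints :", remaining_work(TASK, LAST_CHECKPOINT, True), "s left")
print("private checkpoints:", remaining_work(TASK, LAST_CHECKPOINT, False), "s left")
```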


Journal of Parallel and Distributed Computing | 2011

A maximum independent set approach for collusion detection in voting pools

Filipe Araujo; Jorge Farinha; Patricio Domingues; Gheorghe Cosmin Silaghi; Derrick Kondo

From agreement problems to replicated software execution, we frequently find scenarios with voting pools. Unfortunately, Byzantine adversaries can join and collude to distort the results of an election. We address the problem of detecting these colluders in scenarios where they repeatedly participate in voting decisions. We investigate different malicious strategies, such as naive or colluding attacks, with fixed identifiers or in whitewashing attacks. Using a graph-theoretic approach, we frame collusion detection as a problem of identifying maximum independent sets. We then propose several new graph-based methods and show, via analysis and simulations, their effectiveness and practical applicability for collusion detection.
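
A minimal sketch of the graph-theoretic framing: build a conflict graph whose edges join nodes that voted against each other, then look for a large independent set (a mutually consistent group of voters). The minimum-degree greedy heuristic below is a standard approximation used for illustration, not necessarily one of the methods proposed in the paper.

```python
# Greedy (minimum-degree) heuristic for a large independent set in a
# conflict graph; an edge means the two nodes disagreed in some pool.
def greedy_independent_set(adjacency):
    """adjacency: dict node -> set of conflicting nodes."""
    remaining = {n: set(neigh) for n, neigh in adjacency.items()}
    independent = set()
    while remaining:
        # pick the node with the fewest remaining conflicts
        node = min(remaining, key=lambda n: len(remaining[n]))
        independent.add(node)
        dropped = remaining.pop(node) | {node}
        for n in list(remaining):
            if n in dropped:
                del remaining[n]
            else:
                remaining[n] -= dropped
    return independent

conflicts = {
    "a": {"x"}, "b": {"x"}, "c": {"x", "y"},
    "x": {"a", "b", "c"}, "y": {"c"},
}
print(greedy_independent_set(conflicts))  # {'a', 'b', 'c'} (in some order)
```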


International Conference on e-Science | 2006

Using Checkpointing to Enhance Turnaround Time on Institutional Desktop Grids

Patricio Domingues; Artur Andrzejak; Luís Moura Silva

In this paper, we present a checkpoint-based scheme to improve the turnaround time of bag-of-tasks applications executed on institutional desktop grids. We propose to share checkpoints among desktop machines in order to reduce the negative impact of resource volatility. Several scheduling policies are evaluated in our study: FCFS, adaptive timeouts, simple replication, replication with checkpoint on demand, and prediction-based checkpointing combined with replication. We used a set of real traces collected from an academic desktop grid environment to perform trace-driven simulations of the proposed scheduling algorithms. The results show that using a shared checkpoint approach may considerably reduce the turnaround time of the applications when compared to the private checkpoints methodology.
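
A minimal sketch of a prediction-driven policy in the spirit of those listed above (checkpoint on demand plus replication when a host is predicted to become unavailable); the decision rule and the timeout threshold are illustrative assumptions, not the paper's policies.

```python
# Decide what to do for a running task, given an availability prediction
# for its host and the age of its last checkpoint.
def schedule_action(predicted_unavailable, checkpoint_age_s, max_age_s=900):
    if predicted_unavailable:
        # save state now and hedge against the predicted outage
        return ["checkpoint_on_demand", "replicate_on_other_machine"]
    if checkpoint_age_s > max_age_s:
        return ["periodic_checkpoint"]
    return ["continue"]

print(schedule_action(predicted_unavailable=True, checkpoint_age_s=300))
print(schedule_action(predicted_unavailable=False, checkpoint_age_s=1200))
```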

Collaboration


Dive into Patricio Domingues's collaborations.

Top Co-Authors

Nuno M. M. Rodrigues
Instituto Politécnico Nacional

Sérgio M. M. de Faria
Instituto Politécnico Nacional

Gheorghe Cosmin Silaghi
Science and Technology Facilities Council

Pedro M. M. Pereira
Polytechnic Institute of Leiria