Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Luís Veiga is active.

Publication


Featured research published by Luís Veiga.


Middleware for Grid Computing | 2008

Heuristic for resources allocation on utility computing infrastructures

João Nuno de Oliveira e Silva; Luís Veiga; Paulo Ferreira

The use of utility on-demand computing infrastructures, such as Amazon's Elastic Clouds [1], is a viable way to speed up lengthy parallel computations for those without access to other cluster or grid infrastructures. With a suitable middleware, bag-of-tasks problems could be easily deployed over a pool of virtual computers created on such infrastructures. In bag-of-tasks problems, as there is no communication between tasks, the number of concurrent tasks is allowed to vary over time. In a utility computing infrastructure, if too many virtual computers are created, the speedups are high but may not be cost effective; if too few computers are created, the cost is low but speedups fall below expectations. Without previous knowledge of the processing time of each task, it is difficult to determine how many machines should be created. In this paper, we present a heuristic to optimize the number of machines that should be allocated to process tasks so that, for a given budget, the speedups are maximal. We have simulated the proposed heuristics against real and theoretical workloads and evaluated the ratios between the number of allocated hosts, charged times, speedups, and processing times. With the proposed heuristics, it is possible to obtain speedups in line with the number of allocated computers, while being charged approximately the same predefined budget.
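
To make the cost/speedup tension concrete, here is a toy simulation (a hypothetical sketch in Java, not the paper's heuristic): a bag of tasks with randomly drawn, a-priori unknown durations is list-scheduled on m machines billed per started hour, and the resulting makespan, charged hours, and speedup are compared for increasing m.

    import java.util.PriorityQueue;
    import java.util.Random;

    // Toy simulation of the cost/speedup tradeoff in bag-of-tasks execution on a
    // pay-per-hour utility infrastructure. Illustrative only; the task durations
    // and the billing model are assumptions, not the paper's workload or heuristic.
    public class BagOfTasksSim {
        public static void main(String[] args) {
            Random rnd = new Random(42);
            double[] tasks = new double[200];          // task durations in minutes (unknown a priori)
            double sequential = 0;
            for (int i = 0; i < tasks.length; i++) {
                tasks[i] = 5 + rnd.nextDouble() * 55;  // 5..60 minutes each
                sequential += tasks[i];
            }
            System.out.println("machines  makespan(h)  charged(h)  speedup");
            for (int m = 1; m <= 64; m *= 2) {
                // Greedy list scheduling: the next task goes to the least-loaded machine.
                PriorityQueue<Double> loads = new PriorityQueue<>();
                for (int i = 0; i < m; i++) loads.add(0.0);
                for (double t : tasks) loads.add(loads.poll() + t);
                double makespan = 0;
                for (double l : loads) makespan = Math.max(makespan, l);
                double makespanH = makespan / 60.0;
                // All machines kept until the longest one finishes, billed per started hour.
                double chargedH = m * Math.ceil(makespanH);
                double speedup = sequential / makespan;
                System.out.printf("%8d  %10.2f  %9.0f  %7.2f%n", m, makespanH, chargedH, speedup);
            }
        }
    }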


ACM/IFIP/USENIX International Conference on Middleware | 2007

Vector-field consistency for ad-hoc gaming

Nuno C. Santos; Luís Veiga; Paulo Ferreira

Developing distributed multiplayer games for ad-hoc networks is challenging. Consistency of the replicated shared state is hard to ensure at a low cost. Current consistency models and middleware systems lack the required adaptability and efficiency when applied to ad-hoc gaming. Hence, developing such robust applications is still a daunting task. We propose i) Vector-Field Consistency (VFC), a new consistency model, and ii) the Mobihoc middleware to ease the programming effort of these games, while ensuring the consistency of replicated objects. VFC unifies i) several forms of consistency enforcement and multi-dimensional criteria (time, sequence, and value) to limit replica divergence, with ii) techniques based on locality-awareness (w.r.t. the players' positions). Mobihoc adopts VFC and provides game programmers with the abstractions to manage game state easily and efficiently. A Mobihoc prototype and a demonstrating game were developed and evaluated. The results obtained are very encouraging.
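
The locality-aware divergence bounds at the heart of VFC can be illustrated with a small sketch (the zone radii and the time/sequence/value limits below are invented for the example; in the real model they are application-defined): each object falls into a consistency zone given by its distance to the player's pivot, and an update must be propagated as soon as the replica's staleness, number of missed updates, or accumulated value drift exceeds that zone's bound.

    // Illustrative sketch of Vector-Field Consistency's locality-aware divergence
    // bounds. Zone radii and the (time, sequence, value) limits are made up here;
    // the real model lets the application parameterize them.
    public class VfcSketch {
        // Per-zone bounds: maximum staleness (s), missed updates, and value drift.
        static final double[][] ZONE_BOUNDS = {
            {0.1,  1,  1.0},   // zone 0: closest to the player, near-strong consistency
            {1.0,  5, 10.0},   // zone 1
            {5.0, 20, 50.0},   // zone 2: far away, weak consistency is acceptable
        };
        static final double[] ZONE_RADII = {10.0, 50.0};  // distances delimiting zones

        static int zoneOf(double distanceToPlayer) {
            for (int z = 0; z < ZONE_RADII.length; z++)
                if (distanceToPlayer <= ZONE_RADII[z]) return z;
            return ZONE_RADII.length;
        }

        // A replica must be refreshed when any of the three divergence dimensions
        // (time, sequence, value) exceeds the bound of the zone it currently sits in.
        static boolean mustPropagate(double distanceToPlayer,
                                     double secondsStale, int missedUpdates, double valueDrift) {
            double[] k = ZONE_BOUNDS[zoneOf(distanceToPlayer)];
            return secondsStale >= k[0] || missedUpdates >= k[1] || valueDrift >= k[2];
        }

        public static void main(String[] args) {
            // A nearby object with little drift still needs refreshing; a distant one does not.
            System.out.println(mustPropagate(5.0, 0.2, 0, 0.0));   // true  (zone 0, time bound exceeded)
            System.out.println(mustPropagate(80.0, 0.2, 0, 0.0));  // false (zone 2, well within bounds)
        }
    }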


Conference on Object-Oriented Programming Systems, Languages, and Applications | 2014

Rubah: DSU for Java on a stock JVM

Luís Pina; Luís Veiga; Michael Hicks

This paper presents Rubah, the first dynamic software updating system for Java that: is portable, implemented via libraries and bytecode rewriting on top of a standard JVM; is efficient, imposing essentially no overhead on normal, steady-state execution; is flexible, allowing nearly arbitrary changes to classes between updates; and is non-disruptive, employing either a novel eager algorithm that transforms the program state with multiple threads, or a novel lazy algorithm that transforms objects as they are demanded, post-update. Requiring little programmer effort, Rubah has been used to dynamically update five long-running applications: the H2 database, the Voldemort key-value store, the Jake2 implementation of the Quake 2 shooter game, the CrossFTP server, and the JavaEmailServer.
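
The lazy update strategy can be pictured with a generic sketch of on-demand state transformation (illustrative only; Rubah achieves this transparently through bytecode rewriting rather than the explicit version check shown here): an object written under the old program version migrates its own state to the new representation the first time it is touched after the update.

    // Generic sketch of lazy state transformation after a dynamic update: an object
    // converts its state from the old to the new representation the first time it
    // is accessed post-update. The explicit version field is only an illustration.
    public class LazyUpdateSketch {
        static volatile int programVersion = 2;   // bumped when an update is installed

        static class Account {
            int version = 1;
            // v1 representation: balance in whole euros; v2 representation: cents.
            long balanceEuros;     // valid while version == 1
            long balanceCents;     // valid once version == 2

            long getBalanceCents() {
                migrateIfNeeded();
                return balanceCents;
            }

            private void migrateIfNeeded() {
                if (version < programVersion) {
                    balanceCents = balanceEuros * 100;   // transform old state on demand
                    version = programVersion;
                }
            }
        }

        public static void main(String[] args) {
            Account a = new Account();
            a.balanceEuros = 42;                        // state written by the old program version
            System.out.println(a.getBalanceCents());    // prints 4200: migrated lazily on first access
        }
    }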


Wireless and Mobile Computing, Networking and Communications | 2013

Clouds of small things: Provisioning infrastructure-as-a-service from within community networks

Amin M. Khan; Leandro Navarro; Leila Sharifi; Luís Veiga

Community networks offer a shared communication infrastructure where communities of citizens build and own open networks. While the IP connectivity of the networking devices is successfully achieved, the number of services and applications available from within the community network is typically small, and the usage of the community network is often limited to providing Internet access to remote areas through wireless links. In this paper we propose to apply the principle of resource sharing of community networks, currently limited to the network bandwidth, to other computing resources, which leads to cloud computing in community networks. Towards this vision, we review some characteristics of community networks and identify potential scenarios for community clouds. We simulate a cloud computing infrastructure service and discuss different aspects of its performance in comparison to a commercial centralized cloud system. We note that in community clouds the computing resources are heterogeneous and less powerful, which affects the time needed to assign resources. The response time of the infrastructure service is high in community clouds even for a small number of resources, since resources are distributed, but it tends to get closer to that of a centralized cloud as the number of resources requested increases. Our initial results suggest that the performance of community clouds depends highly on the community network conditions, but has some potential for improvement with network-aware cloud services. The main strength compared to commercial cloud services, however, is that community cloud services hosted on community-owned resources will follow the principles of community networks and will be neutral and open.


International Parallel and Distributed Processing Symposium | 2005

Asynchronous complete distributed garbage collection

Luís Veiga; Paulo Ferreira

Most distributed garbage collection (DGC) algorithms are not complete, as they fail to reclaim distributed cycles of garbage. Those that achieve such a level of completeness are very costly, as they require either some kind of synchronization or consensus between processes. Others use mechanisms such as backtracking, global counters, a central server, or distributed tracing phases, and/or impose additional load and restrictions on local garbage collection. All these approaches hinder scalability and/or performance significantly. We propose a solution to this problem, i.e., we describe a DGC algorithm capable of reclaiming distributed cycles of garbage asynchronously and efficiently. Our algorithm does not require any particular coordination between processes, and it tolerates message loss. We have implemented the algorithm both on Rotor (a shared-source version of Microsoft .NET) and on OBIWAN (a platform supporting mobile agents, object replication, and remote invocation); we observed that applications are not disrupted.


Journal of Internet Services and Applications | 2013

Internet-scale support for map-reduce processing

Fernando Albuquerque Costa; Luís Veiga; Paulo Ferreira

Volunteer computing (VC) systems harness the computing resources of machines from around the world to perform distributed, independent tasks. Existing infrastructures follow a master/worker model with a centralized architecture, which limits the scalability of the solution due to its dependence on the server. Our goal is to create a fault-tolerant VC platform that supports complex applications by using a distributed model that improves performance and reduces the burden on the server. In this paper we present VMR, a VC system able to run MapReduce applications on top of volunteer resources spread throughout the Internet. VMR leverages users' bandwidth through the use of inter-client communication, and uses a lightweight task validation mechanism. We describe VMR's architecture and evaluate its performance by executing several MapReduce applications on a wide-area testbed. Our results show that VMR successfully runs MapReduce tasks over the Internet. When compared to an unmodified VC system, VMR obtains a performance increase of over 60% in application turnaround time, while reducing server bandwidth use by two orders of magnitude and showing no discernible overhead.
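
Task validation in volunteer computing is commonly implemented by replicating each task on several clients and accepting a result only when enough replicas agree; the sketch below shows that generic majority-vote idea (an assumption used for illustration, not VMR's specific lightweight mechanism).

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Generic majority-vote validation of replicated volunteer-computing results.
    // This only illustrates the replication-and-quorum idea common to VC systems.
    public class ResultValidation {
        // Accept a result digest when at least `quorum` replicas returned it.
        static String validate(List<String> replicaDigests, int quorum) {
            Map<String, Integer> votes = new HashMap<>();
            for (String d : replicaDigests) {
                int v = votes.merge(d, 1, Integer::sum);
                if (v >= quorum) return d;       // enough agreement: accept this result
            }
            return null;                         // no quorum: reschedule the task
        }

        public static void main(String[] args) {
            // Three volunteers ran the same map task; two agree, one is faulty or malicious.
            System.out.println(validate(List.of("a1b2", "ffff", "a1b2"), 2));  // prints a1b2
            System.out.println(validate(List.of("a1b2", "ffff", "0000"), 2));  // prints null
        }
    }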


ACM Symposium on Applied Computing | 2003

RepWeb: replicated Web with referential integrity

Luís Veiga; Paulo Ferreira

Replication of web content, through mirroring of web sites or off-line browsing of content, is one of the most common techniques to increase content availability, reduce network bandwidth usage, and minimize browsing delays on the World Wide Web. The World Wide Web does not support referential integrity, i.e., broken links do exist. This has been considered, for some years now, one of the most serious problems of the web. This is true in various fields, e.g.: i) if a user pays for some service in the form of web pages, he requires such pages to be reachable all the time, and ii) archived web resources, whether scientific, legal, or historic, that are still referenced need to be preserved and remain available. Current approaches to the broken-link problem are not able to preserve referential integrity on the web and, simultaneously, support replication and minimize storage waste due to memory leaks. Some of them also impose specific authoring and management systems. Thus, the limitations of current systems reside in three issues: transparency, completeness, and safety. We propose a system, RepWeb, comprising an application to access and manage replicated web content and an implementation of an acyclic distributed garbage collection algorithm for wide-area replicated memory, that satisfies all these requirements. It supports replication, enforces referential integrity on the web, and minimizes storage waste.


IEEE Transactions on Parallel and Distributed Systems | 2003

OBIWAN: design and implementation of a middleware platform

Paulo Ferreira; Luís Veiga; Carlos Ribeiro

Programming distributed applications that support data sharing is very hard. In most middleware platforms, programmers must deal with system-level issues for which they do not have adequate knowledge, e.g., object replication, abusive resource consumption by mobile agents, and distributed garbage collection. As a result, programmers are diverted from their main task: the application logic. In addition, given that such system-level issues are extremely error-prone, programmers spend countless hours debugging. We designed, implemented, and evaluated a middleware platform called OBIWAN that frees the programmer from the above-mentioned system-level issues. OBIWAN has the following distinctive characteristics: 1) it allows the programmer to develop applications using either remote object invocation, object replication, or mobile agents, according to the specific needs of the application; 2) it supports automatic object replication (e.g., incremental on-demand replication, transparent object faulting and serving, etc.); 3) it supports distributed garbage collection of useless replicas; and 4) it supports the specification and enforcement of history-based security policies well adapted to mobile agents' needs (e.g., preventing abusive resource consumption).


International Conference on Distributed Computing Systems | 2002

Incremental replication for mobility support in OBIWAN

Luís Veiga; Paulo Ferreira

The need for sharing is well known in a large number of distributed collaborative applications. These applications are difficult to develop for wide-area (possibly mobile) networks because of slow and unreliable connections. For this purpose, we developed a platform called OBIWAN that: i) allows the application to decide, at run time, the mechanism by which objects should be invoked, remote method invocation or invocation on a local replica; ii) allows incremental replication of large object graphs; iii) allows the creation of dynamic clusters of data; and iv) provides hooks for the application programmer to implement a set of application-specific properties, such as relaxed transactional support or update dissemination. These mechanisms allow an application to deal with situations that frequently occur in a (mobile) wide-area network, such as disconnections and slow links: i) as long as the objects needed by an application (or by an agent) are colocated, there is no need to be connected to the network, and ii) it is possible to replace, at run time, remote invocations with local invocations on replicas, thus improving the performance and adaptability of applications. The prototype, developed in Java, is very small and simple to use; the performance results are very encouraging, and existing applications can easily be modified to take advantage of OBIWAN.
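
The run-time choice between remote invocation and invocation on a local replica, together with on-demand (incremental) replication, can be sketched with a simple proxy (class and method names here are hypothetical, not OBIWAN's API): the proxy either forwards calls to the remote object or faults in a local replica the first time it is used.

    import java.util.function.Supplier;

    // Sketch of the remote-vs-replica choice and incremental replication: a proxy
    // either forwards calls to the remote object or faults in a local replica on
    // first use. All names below are hypothetical, chosen only for illustration.
    public class ReplicationSketch {
        interface Document { String read(); }

        static class DocumentProxy implements Document {
            private final Document remote;            // stub for remote method invocation
            private final Supplier<Document> fetch;   // fetches (part of) the object graph
            private Document replica;                 // filled in lazily, on demand
            private boolean useReplica;               // decided at run time (e.g., on a weak link)

            DocumentProxy(Document remote, Supplier<Document> fetch, boolean useReplica) {
                this.remote = remote;
                this.fetch = fetch;
                this.useReplica = useReplica;
            }

            public String read() {
                if (!useReplica) return remote.read();       // ordinary remote invocation
                if (replica == null) replica = fetch.get();  // incremental replication: fault in on first use
                return replica.read();                       // subsequent calls work disconnected
            }
        }

        public static void main(String[] args) {
            Document remote = () -> "contents fetched over the network";
            Document localCopy = () -> "contents of the local replica";
            DocumentProxy p = new DocumentProxy(remote, () -> localCopy, true);
            System.out.println(p.read());   // faults in the replica, then answers locally
        }
    }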


International Parallel and Distributed Processing Symposium | 2010

Service and resource discovery in cycle-sharing environments with a utility algebra

João Nuno de Oliveira e Silva; Paulo Ferreira; Luís Veiga

The Internet has witnessed a steady and widespread increase in available idle computing cycles and computing resources in general. Such available cycles simultaneously allow and foster the development of existing and new computationally demanding applications, driven by algorithm complexity, intensive data processing, or both. Available cycles may be harvested from several scenarios, ranging from college or office LANs, cluster, grid, and utility or cloud computing infrastructures, to peer-to-peer overlay networks. Existing resource discovery protocols have a number of shortcomings for the existing variety of cycle-sharing scenarios: they either (i) were designed to return only a binary answer stating whether a remote computer fulfills the requirements, (ii) rely on centralized (or coherently replicated) schedulers that are impractical in certain environments such as peer-to-peer computing, or (iii) are not extensible, as it is impossible to define new resources to be discovered and evaluated, or new ways to evaluate them. In this paper we present a novel, extensible, expressive, and flexible requirement specification algebra and resource discovery middleware. Besides standard resources (CPU, memory, network bandwidth, ...), application developers may define new resource requirements and new ways to evaluate them. Application programmers can write complex requirements (that evaluate several resources) using fuzzy logic operators. Each resource evaluation (either standard or specially coded) returns a value between 0.0 and 1.0 stating the capacity to (partially) fulfill the requirement, considering client-specific utility depreciation (i.e., partial utility, a downgraded measure of how the user assesses the available resources) and policies for combined utility evaluation. By comparing the values obtained from the various hosts, it is possible to know precisely which ones best fulfill each client's needs regarding a set of required resources.
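
A minimal sketch of the partial-utility idea, assuming the common fuzzy interpretation of AND as minimum and OR as maximum (the resource values and utility functions below are made up for the example; the paper's algebra is extensible to application-defined resources and evaluators): each resource evaluation yields a value in [0.0, 1.0], complex requirements combine those values, and hosts are ranked rather than merely accepted or rejected.

    // Illustrative sketch of ranking hosts by partial utility in [0, 1] using
    // fuzzy-logic combinators (min as AND, max as OR). The resource model is
    // invented for the example.
    public class UtilitySketch {
        static double fuzzyAnd(double a, double b) { return Math.min(a, b); }
        static double fuzzyOr(double a, double b)  { return Math.max(a, b); }

        // Partial utility of a resource: 1.0 when the requirement is fully met,
        // a degraded value in (0, 1) when it is only partially met, 0.0 otherwise.
        static double atLeast(double offered, double required) {
            return Math.max(0.0, Math.min(1.0, offered / required));
        }

        // Requirement: (cpu >= 4 cores AND ram >= 8 GB) OR bandwidth >= 100 Mbit/s.
        static double evaluate(double cpuCores, double ramGb, double mbps) {
            return fuzzyOr(fuzzyAnd(atLeast(cpuCores, 4), atLeast(ramGb, 8)),
                           atLeast(mbps, 100));
        }

        public static void main(String[] args) {
            // Hosts that would all "fail" a binary check can still be ranked.
            System.out.printf("hostA: %.2f%n", evaluate(2, 16, 10));   // CPU only half of what is asked
            System.out.printf("hostB: %.2f%n", evaluate(4, 8, 1));     // fully satisfies the first clause
            System.out.printf("hostC: %.2f%n", evaluate(1, 2, 80));    // bandwidth nearly sufficient
        }
    }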

Collaboration


Dive into Luís Veiga's collaborations.

Top Co-Authors

Paulo Ferreira
Instituto Superior Técnico

Felix Freitag
Polytechnic University of Catalonia

Sérgio Esteves
Instituto Superior Técnico

Mennan Selimi
Polytechnic University of Catalonia

Carlos Ribeiro
Instituto Superior Técnico