Raphael Guerra
Kaiserslautern University of Technology
Publication
Featured researches published by Raphael Guerra.
international symposium on microarchitecture | 2011
Enrico Bini; Giorgio C. Buttazzo; Johan Eker; Stefan Schorr; Raphael Guerra; Gerhard Fohler; Karl-Erik Årzén; Vanessa Romero; Claudio Scordino
High-performance embedded systems require the execution of many applications on multicore platforms and are subject to stringent constraints. The ACTORS project approach provides temporal isolation through resource reservation on a multicore platform, adapting the available resources based on the overall quality requirements. The architecture is fully operational on both ARM MPCore and x86 multicore platforms.
acm symposium on applied computing | 2008
Raphael Guerra; Julius C. B. Leite; Gerhard Fohler
M/M/1 queues have traditionally been used to model systems such as phone calls at a call center, banking services, and so on. However, recent studies showed that an M/M/1 queue does not properly model a web server system. In this paper we investigate the impact of this assumption on providing timeliness constraints in an energy-efficient web server. Although energy efficiency is a key issue, it should not be attained at the expense of poor quality of service. The work proposed here describes a technique that uses queueing theory results to balance energy consumption and adequate application response times in heterogeneous CPU-intensive server clusters. Moreover, we investigate the I/O impact on a purely CPU-oriented energy saving strategy. Our results show that the assumption of a Poisson process is a good approximation for modeling a web server.
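The trade-off the abstract describes can be illustrated with the M/M/1 mean response-time formula W = 1/(mu - lambda). The sketch below is a simplified illustration, not the paper's technique: the function name, the linear frequency-to-service-rate scaling, and all numbers are assumptions.

```python
# Hedged sketch: pick the lowest CPU frequency whose M/M/1 mean
# response time still meets a response-time bound, saving energy.
# Assumes the service rate scales linearly with frequency.

def min_frequency(lam, mu_max, freqs, deadline):
    """lam: arrival rate (req/s), mu_max: service rate at the highest
    frequency, freqs: available frequencies (fractions of max),
    deadline: mean response-time bound (s). Returns the lowest
    feasible frequency, or the maximum one under overload."""
    f_max = max(freqs)
    for f in sorted(freqs):
        mu = mu_max * f / f_max              # service rate at frequency f
        if mu > lam and 1.0 / (mu - lam) <= deadline:
            return f
    return f_max                             # overload: run at full speed

# Example: 80 req/s arrivals, 200 req/s peak service rate,
# target mean response time of 20 ms.
print(min_frequency(80.0, 200.0, [0.6, 0.8, 1.0], 0.02))  # -> 0.8
```

At 0.6 of the peak frequency the mean response time is 1/(120-80) = 25 ms, which misses the 20 ms bound; at 0.8 it drops to 12.5 ms, so the cluster can slow down without violating timeliness.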
acm symposium on applied computing | 2014
Gustavo Zanatta; Giulio Dariano Bottari; Raphael Guerra; Julius C. B. Leite
Our society has become ever more dependent on large datacenters. Search engines, e-commerce, and cloud computing are just some of the broadly used services that rely on large-scale datacenters. Datacenter managers are reluctant to make non-functional changes to the facilities of a perfectly operational installation, as failures can be very expensive. Therefore, one of the big challenges of green computing is how to reduce the energy consumption and environmental impact of such systems without compromising the business. In this work, we propose a thermal monitoring tool for datacenters based on a wireless sensor network (WSN) composed of ready-to-use modules. This tool provides a better understanding of the thermal behavior of datacenters and can help datacenter managers, for example, to manually adjust the cooling system in order to avoid energy waste and reduce cost. The tool is minimally intrusive to the server facilities, as it is fully independent of server operation and requires only the setup of small, wireless, battery-powered sensors. Our tool was implemented and tested on a real datacenter in order to demonstrate the feasibility of our approach.
design, automation, and test in europe | 2012
Raphael Guerra; Gerhard Fohler
Target sensitive tasks have an execution window for feasibility and must execute at a target point in time for maximum utility. In the gravitational task model, a task can express a target point and a utility decay as a function of the deviation from this point. A method called equilibrium approximates the schedule with maximum utility accrual based on an analogy with physical pendulums. In this paper, we propose a scheduling algorithm for this task model to schedule periodic tasks. The basic idea of our solution is to combine the equilibrium with Earliest Deadline First (EDF) in order to reuse EDF's well-studied timeliness analysis. We present simulation results and an example multimedia application to show the benefits of our solution.
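The pendulum analogy behind the equilibrium can be sketched roughly as follows. All names, the linear utility weights, and the center-of-mass formulation are illustrative assumptions, not the paper's exact method: jobs run back to back, and the block is shifted to the weight-averaged start that balances the per-job deviations, like masses settling a coupled pendulum.

```python
# Hedged sketch of the equilibrium idea: each job in a back-to-back
# block has an offset within the block, a target time, and a weight;
# the block start is the weighted average of the shifts that would
# hit each job's target exactly.

def equilibrium_shift(jobs):
    """jobs: list of (offset_in_block, target_time, weight).
    Returns the block start time that balances the weighted
    deviations from the target points."""
    total_w = sum(w for _, _, w in jobs)
    # the shift that would put job i exactly on target is target - offset
    return sum(w * (t - o) for o, t, w in jobs) / total_w

# Two jobs: one at block offset 0 aiming at t=10, one at offset 3
# aiming at t=12, equal weights.
start = equilibrium_shift([(0, 10, 1.0), (3, 12, 1.0)])
print(start)  # 9.5 -> deviations -0.5 and +0.5 cancel out
```

With unequal weights the block settles closer to the more important job's target, mirroring how a heavier mass dominates a pendulum's resting point.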
symposium on operating systems principles | 2014
Julius C. B. Leite; Raphael Guerra; Rivalino Matias; Antônio Augusto Fröhlich
One year after our first workshop on dependability issues in cloud computing, it is possible to say that cloud adoption has reached ubiquity, paraphrasing a 2014 report [2]. In that document, RightScale, a cloud portfolio management company, says that 94% of the organisations they surveyed are running applications or at least experimenting with Infrastructure-as-a-Service. Moreover, 87% of these companies are using public clouds, often following a hybrid cloud approach. In a 2013 report, Verizon said that organisations were no longer using clouds just for development and testing, as production applications accounted for 60% of cloud usage [3]. A post in Forbes last January estimates that US businesses will spend $13 billion on cloud computing in 2014 [1]. This level of cloud computing adoption suggests that the time is ripe for research on services and processes for cloud dependability and security. Governments are now aware of the benefits and challenges of cloud computing, as shown by initiatives such as the European Commission's Cloud Computing Strategy and the U.S. Chief Information Officer and Federal CIO Council's cloud.cio.gov. The academic community is not far behind, with a large number of conferences being promoted by professional societies such as the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE). Consequently, research in the area is thriving. The Second International Workshop on Dependability Issues in Cloud Computing – DISCCO 2013 – aimed to contribute to this trend of research on cloud computing with a focus on dependability and security. This section of the present issue of the Operating Systems Review reports the activities of the workshop and presents extended versions of two papers selected from its program based on their timeliness and quality.
acm symposium on applied computing | 2011
Raphael Guerra; Gerhard Fohler
2011 Brazilian Symposium on Computing System Engineering | 2011
Raphael Guerra; Gerhard Fohler
The gravitational task model allows target sensitive applications (e.g. multimedia and control) to express target points for maximum utility, and a utility decay as a function of the deviation from these points. The compromise among their deviations for maximized utility is calculated based on a physical pendulum analogy. However, in an overloaded system a compromise among all jobs does not necessarily yield maximum utility. In this paper we propose an overload handling mechanism for the gravitational task model which accounts for both system load and target sensitivity of applications. This mechanism considers the trade-off between aborting and shifting the execution of a job to increase utility accrual and improve resource usage. The heuristic for aborting jobs is based on their utility and the amount of resources they consume. An aborted job does not yield any utility, but frees resources that can be exploited by other jobs for increased utility accrual. A multimedia case study shows that our mechanism reduces frame display jitter and improves resource usage.
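An abort heuristic in the spirit described above can be sketched as follows. The scoring rule (utility per unit of remaining execution time) and all names are assumptions for illustration, not the paper's actual heuristic.

```python
# Hedged sketch: under overload, abort the job whose expected utility
# is lowest relative to the CPU time it still needs, freeing resources
# for jobs that yield more utility per unit of time.

def choose_abort(jobs):
    """jobs: list of (name, expected_utility, remaining_time).
    Returns the name of the job with the lowest utility density,
    i.e. the cheapest job to sacrifice."""
    return min(jobs, key=lambda j: j[1] / j[2])[0]

overloaded = [("video", 8.0, 2.0),   # 4.0 utility per time unit
              ("audio", 3.0, 3.0)]   # 1.0 utility per time unit
print(choose_abort(overloaded))      # "audio" is aborted
```

Aborting "audio" frees three time units that the remaining jobs can use to run closer to their target points, which is the trade-off the mechanism weighs against merely shifting executions.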
Real-time Systems | 2009
Raphael Guerra; Gerhard Fohler
Adaptive resource management uses resource allocation mechanisms to guarantee a minimum availability of required resources to applications. In this paper, we propose an intuitive and low-overhead (linear-complexity) bandwidth compression algorithm. Low overhead is necessary for on-line deployment, and intuitiveness makes the solution easy to understand. The resource allocation is proportional to the resource demand and importance of applications, hence providing fairness and increased overall quality of service (QoS). Our compression algorithm is optimal, and we present a qualitative analysis of the intuition, which is based on an analogy with pendulum systems.
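The proportional idea can be sketched in a few lines. The exact rule below (shares proportional to importance-weighted demand, capped at each demand) is an assumption for illustration, not the paper's algorithm, which also guarantees per-application minimums and is optimal.

```python
# Hedged sketch of linear-time proportional bandwidth compression:
# when total demand exceeds capacity, scale each application's share
# in proportion to its demand times its importance weight.

def compress(demands, weights, capacity):
    """demands: requested bandwidth fractions; weights: importance;
    capacity: total available bandwidth. Returns the compressed
    allocations (each capped at its own demand). A full algorithm
    would also redistribute the slack left by capped applications."""
    if sum(demands) <= capacity:
        return list(demands)              # no compression needed
    weighted = [d * w for d, w in zip(demands, weights)]
    total_w = sum(weighted)
    return [min(d, capacity * x / total_w)
            for d, x in zip(demands, weighted)]

# Two apps each demanding 0.6 of the CPU, importance 2:1,
# compressed into a capacity of 1.0.
print(compress([0.6, 0.6], [2.0, 1.0], 1.0))
```

The more important application keeps its full demand of 0.6, while the less important one is compressed to 1/3, reflecting the fairness-by-importance goal stated in the abstract. Every step is a single pass over the applications, hence linear complexity.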
First International Workshop on Adaptive Resource Management | 2010
Vanessa Romero Segovia; Karl-Erik Årzén; Stefan Schorr; Raphael Guerra; Gerhard Fohler; Johan Eker; Harald Gustafsson
Archive | 2010
Raphael Guerra; Stefan Schorr; Gerhard Fohler; Vanessa Romero; Karl-Erik Årzén; Johan Eker