George Lima
Federal University of Bahia
Publications
Featured research published by George Lima.
Real-Time Systems Symposium | 2011
Paul Regnier; George Lima; Ernesto Massa; Greg Levin; Scott A. Brandt
Optimal multiprocessor real-time schedulers incur significant overhead for preemptions and migrations. We present RUN, an efficient scheduler that reduces the multiprocessor problem to a series of uniprocessor problems. RUN significantly outperforms existing optimal algorithms, with an upper bound of O(log m) average preemptions per job on m processors (fewer than 3 per job in all of our simulated task sets), and reduces to Partitioned EDF whenever a proper partitioning is found.
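RUN's building block is uniprocessor EDF scheduling, to which the multiprocessor problem is reduced. As a rough illustration of that uniprocessor side only (not of RUN itself), here is the classic utilisation test for implicit-deadline periodic tasks under EDF, sketched in Python; the `(wcet, period)` pairs are hypothetical values for illustration:

```python
from fractions import Fraction

def edf_schedulable(tasks):
    """Classic uniprocessor EDF test for implicit-deadline periodic
    tasks: the set is schedulable iff total utilisation <= 1.
    Each task is a (wcet, period) pair of positive integers."""
    utilisation = sum(Fraction(wcet, period) for wcet, period in tasks)
    return utilisation <= 1

# Hypothetical task set: U = 1/4 + 1/3 + 1/12 = 2/3 <= 1
print(edf_schedulable([(1, 4), (2, 6), (1, 12)]))  # True
```

Exact rational arithmetic via `Fraction` avoids the floating-point rounding that could flip a borderline utilisation test.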
Operating Systems Review | 2008
Paul Regnier; George Lima; Luciano Porto Barreto
Several real-time Linux extensions are available nowadays. Two that have received special attention recently are Preempt-RT and Xenomai. This paper evaluates to what extent they provide deterministic guarantees when reacting to external events, an essential characteristic of real-time systems. To do so, we define two simple experimental approaches. Our results indicate that Preempt-RT is more prone to temporal variations than Xenomai when the system is subject to overload scenarios.
Euromicro Conference on Real-Time Systems | 2016
George Lima; Dario Dias; Edna Barros
Extreme Value Theory (EVT) is a powerful statistical framework for estimating maximum values of random variables and has recently been applied to deriving probabilistic bounds on task execution times (pWCET). Task execution time data are collected from measurements, and the maximum measured values are fit to an extreme value model. In this paper we provide a careful study of the applicability and effectiveness of EVT in this application field. The study is based on extensive experiments, for which we have designed an embedded platform equipped with a random cache of configurable size. Based on evidence from the experiments, we provide the following contributions: (i) we give a new definition of pWCET that conforms with the fact that pWCET estimates depend on the input data distribution used during analysis; (ii) we show that using the Generalized Extreme Value (GEV) distribution is necessary, since the more restrictive modeling based on the Gumbel distribution may yield unsafe or over-estimated pWCET values; and (iii) we confirm that hardware randomization favors the applicability of EVT, although it does not ensure it, since the distribution of maxima for execution time data is not guaranteed to be analyzable via EVT.
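As background for the measurement-based approach this abstract describes, a minimal block-maxima sketch in Python follows (stdlib only). Note the hedge: the paper argues the full GEV model is necessary, but this sketch fits the simpler Gumbel case via the method of moments, purely to illustrate the pipeline; function names and block size are illustrative assumptions.

```python
import math
import statistics

def block_maxima(samples, block_size):
    """Split measured execution times into blocks and keep each block's
    maximum -- the data an extreme value model is fit to."""
    return [max(samples[i:i + block_size])
            for i in range(0, len(samples) - block_size + 1, block_size)]

def gumbel_moment_fit(maxima):
    """Method-of-moments fit of a Gumbel distribution to block maxima:
    scale = sqrt(6) * stdev / pi, loc = mean - Euler-Mascheroni * scale."""
    scale = math.sqrt(6) * statistics.stdev(maxima) / math.pi
    loc = statistics.mean(maxima) - 0.5772156649 * scale
    return loc, scale

def gumbel_quantile(loc, scale, p):
    """Execution-time value exceeded with probability p under the fitted
    model -- a (Gumbel-based) pWCET estimate at exceedance level p."""
    return loc - scale * math.log(-math.log(1 - p))
```

Smaller exceedance probabilities yield larger pWCET estimates, which is exactly the trade-off a probabilistic bound expresses.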
International Performance Computing and Communications Conference | 2011
Antonio M. Carianha; Luciano Porto Barreto; George Lima
Providing location privacy to users is one of the important issues that must be addressed in Vehicular Ad-Hoc Networks. Recent solutions address it by using cryptographic “mix-zones”, which are anonymizing regions where nodes change their temporary identities (pseudonyms) without being tracked. However, existing solutions are vulnerable to internal attackers, since within a mix-zone messages are encrypted using a group secret key. In this paper we improve the location privacy of mix-zones via extensions to the CMIX protocol. By carrying out extensive simulations, we investigate and compare the effective location privacy provided by the proposed approach.
IEEE Transactions on Industrial Informatics | 2010
Eduardo Camponogara; Augusto Born de Oliveira; George Lima
The complexity of real-time systems has substantially increased in the past few years regarding both hardware and software aspects. The use of modern sensors, able to capture image and audio data, demands predictable multimedia-like data processing. Moreover, applications like autonomous robots, surveillance, or modern multimedia players may well be characterized by several operation modes, each one associated with light conditions, vision angle, changes in user requirements, etc. In this paper, we describe suitable scheduling mechanisms that address these aspects. Application modes are characterized by their required processing bandwidth and benefit values. By using bandwidth reservation schedulers, dynamically reconfiguring scheduling parameters is cast as an optimization problem whose goal is to maximize the overall system benefit subject to schedulability constraints. Two different models for the problem are defined, Discrete and Continuous. The former gives rise to an NP-hard problem for which efficient approximate solutions are derived. An optimal and polynomial solution to the Continuous model is derived. Both models are then extended to incorporate task execution times described as probability distributions. Using this stochastic modeling, one can dynamically reconfigure the scheduler subject to probabilistic schedulability guarantees. The derived solutions are evaluated by extensive simulation, which indicates the good performance of the proposed reconfiguration mechanisms.
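The Discrete model described above has the flavour of a multiple-choice knapsack: pick exactly one mode per application so as to maximise total benefit under a bandwidth budget. A minimal dynamic-programming sketch in Python follows; the function name, integer bandwidth units, and example values are illustrative assumptions, not the paper's algorithm:

```python
def select_modes(apps, capacity):
    """Multiple-choice knapsack DP. Each app is a list of
    (bandwidth, benefit) modes; exactly one mode per app is chosen.
    Returns the maximum total benefit within the bandwidth budget,
    or -inf if no feasible assignment exists."""
    NEG = float('-inf')
    best = [NEG] * (capacity + 1)   # best[b] = max benefit using bandwidth b
    best[0] = 0
    for modes in apps:
        nxt = [NEG] * (capacity + 1)
        for used in range(capacity + 1):
            if best[used] == NEG:
                continue
            for bw, benefit in modes:
                if used + bw <= capacity:
                    nxt[used + bw] = max(nxt[used + bw], best[used] + benefit)
        best = nxt                  # one mode per app: always advance
    return max(best)

# Two hypothetical apps, bandwidth budget of 80 (percentage points)
apps = [[(20, 3), (40, 5)], [(30, 4), (60, 9)]]
print(select_modes(apps, 80))  # 12 (modes 20+60)
```

The DP is pseudo-polynomial in the bandwidth budget, consistent with the abstract's observation that the Discrete model is NP-hard and solved approximately in practice.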
Euromicro Conference on Real-Time Systems | 2008
George Lima; Eduardo Camponogara; Ana Carolina Sokolonski
Modern real-time systems must be designed to be highly adaptable, reacting to aperiodic events in a predictable manner and exhibiting graceful degradation in overload scenarios whenever needed. In this context, it is useful to structure the system as a set of multiversion tasks. Task versions can be modeled to implement services with various levels of quality. In overload scenarios, for instance, a lower-quality service may be scheduled for execution, preserving system correctness and providing graceful degradation. The goal of the reconfiguration mechanism is to select the versions of tasks that lead to the maximum benefit for the system at runtime. In this paper, we provide a schedulability condition from which we derive an optimal pseudo-polynomial solution to this problem. Then, a faster approximation solution is described. Results from simulation indicate the effectiveness of the proposed approach.
Latin American Symposium on Dependable Computing | 2005
George Lima; Alan Burns
We describe an approach to scheduling hard real-time tasks that takes fault scenarios into account. All tasks are scheduled at run-time according to their fixed priorities, which are determined off-line. Upon error detection, special tasks are released to perform error-recovery actions. We allow error-recovery actions to be executed at higher priority levels so that the fault resilience of the task set can be increased. To do so, we extend the well-known response time analysis technique and describe a non-standard priority assignment policy. Results from simulation indicate that the fault resilience of task sets can be significantly increased by using the proposed approach.
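The response time analysis extended in this work is the standard fixed-point recurrence R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j. A minimal Python sketch of that baseline recurrence follows (the fault-tolerance extensions are omitted; the task representation is an assumption for illustration):

```python
import math

def response_time(task_index, tasks):
    """Worst-case response time by fixed-point iteration:
    R = C_i + sum_{j in hp(i)} ceil(R / T_j) * C_j.
    `tasks` is a list of (wcet, period) pairs sorted by descending
    priority; deadlines are assumed equal to periods. Returns None
    if the task misses its deadline."""
    c_i, t_i = tasks[task_index]
    r = c_i
    while True:
        interference = sum(math.ceil(r / t_j) * c_j
                           for c_j, t_j in tasks[:task_index])
        nxt = c_i + interference
        if nxt == r:        # fixed point reached: converged
            return r
        if nxt > t_i:       # response time exceeds the deadline
            return None
        r = nxt

# Hypothetical rate-monotonic task set (wcet, period)
tasks = [(1, 4), (2, 6), (3, 12)]
print([response_time(i, tasks) for i in range(3)])  # [1, 3, 10]
```

Convergence is guaranteed when total utilisation is below 1, since the iterates are non-decreasing and bounded by the deadline check.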
Real-Time Systems Symposium | 2003
George Lima; Alan Burns
Consensus is known to be a fundamental problem in fault-tolerant distributed systems. Solving this problem provides the means for distributed processes to agree on a single value. This, however, requires extra communication effort. For some real-time communication networks such effort may have undesirable performance implications due to their limited bandwidth. This is certainly the case with the Controller Area Network (CAN), which is widely used to support real-time systems. This paper shows how some underlying properties of CAN can be used to solve the consensus problem. The proposed consensus protocol tolerates the maximum number of process crashes and is both efficient and flexible. The described solution is proved correct, its complexity is analyzed, and its performance is evaluated by simulation.
Real-Time Systems Symposium | 2013
J. Augusto Santos; George Lima; Konstantinos Bletsas; Shinpei Kato
We present HIME, a new EDF-based semi-partitioned scheduling algorithm which allows at most one migrating task per processor. In a system with m processors, this arrangement limits the number of migrating tasks to at most m/2 and the number of migrations per job to at most m-1. HIME has a utilisation bound of at least 74.9%, and can be configured to achieve 75%, the theoretical limit for semi-partitioned schemes with at most m/2 migrating tasks. Experiments show that the average system utilisation achieved by HIME is about 95%.
Innovations in Systems and Software Engineering | 2010
André Luís Nunes Muniz; Aline Maria Santos Andrade; George Lima
A new tool for integrating formal methods, particularly model checking, into the development process of component-based real-time systems specified in UML is proposed. The described tool, TANGRAM (Tool for Analysis of Diagrams), performs automatic translation from UML diagrams into timed automata, which can be verified by the UPPAAL model checker. We focus on the CORBA Component Model. We demonstrate the overall process of our approach, from system design to verification, using a simple but realistic application from the train control domain. A more complex case study in the same domain is also described.