Publication


Featured research published by Luiz A. Celiberto.


Robot Soccer World Cup | 2008

Heuristic Reinforcement Learning Applied to RoboCup Simulation Agents

Luiz A. Celiberto; Carlos H. C. Ribeiro; Anna Helena Reali Costa; Reinaldo A. C. Bianchi

This paper describes the design and implementation of robotic agents for the RoboCup Simulation 2D category that learn using a recently proposed Heuristic Reinforcement Learning algorithm, the Heuristically Accelerated Q-Learning (HAQL). This algorithm allows the use of heuristics to speed up the well-known Reinforcement Learning algorithm Q-Learning. The HAQL algorithm is characterized by a heuristic function that influences the choice of actions. A set of empirical evaluations was conducted in the RoboCup 2D Simulator, and experimental results show that even very simple heuristics significantly enhance the performance of the agents.
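
The core of HAQL can be summarized in a few lines. The sketch below, in Python, shows the idea in a tabular setting: the heuristic H(s, a) is added to the Q-values only when actions are selected, leaving the Q-learning update itself unchanged. The state and action counts, the learning parameters and the weight xi are illustrative assumptions, not values from the paper.

    import numpy as np

    # Tabular sketch of heuristically accelerated Q-learning (HAQL): the
    # heuristic biases action selection but never enters the learning target.
    # All sizes and parameters below are illustrative assumptions.
    n_states, n_actions = 100, 4
    Q = np.zeros((n_states, n_actions))   # action-value estimates
    H = np.zeros((n_states, n_actions))   # heuristic bonus from domain knowledge
    alpha, gamma, epsilon, xi = 0.1, 0.9, 0.1, 1.0
    rng = np.random.default_rng()

    def choose_action(state):
        """Epsilon-greedy over the heuristically biased values Q + xi * H."""
        if rng.random() < epsilon:
            return int(rng.integers(n_actions))
        return int(np.argmax(Q[state] + xi * H[state]))

    def update(state, action, reward, next_state):
        """Standard Q-learning update; the heuristic is not part of the target."""
        td_target = reward + gamma * np.max(Q[next_state])
        Q[state, action] += alpha * (td_target - Q[state, action])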


International Joint Conference on Artificial Intelligence | 2011

Using cases as heuristics in reinforcement learning: a transfer learning application

Luiz A. Celiberto; Jackson P. Matsuura; Ramon López de Mántaras; Reinaldo A. C. Bianchi

In this paper we propose to combine three AI techniques to speed up a Reinforcement Learning algorithm in a Transfer Learning problem: Case-Based Reasoning, Heuristically Accelerated Reinforcement Learning and Neural Networks. To do so, we propose a new algorithm, called L3, which works in three stages: in the first stage, it uses Reinforcement Learning to learn how to perform one task and stores the optimal policy for this problem as a case base; in the second stage, it uses a Neural Network to map actions from one domain to actions in the other domain; and, in the third stage, it uses the case base learned in the first stage as heuristics to speed up learning in a related, but different, task. The RL algorithm used in the first stage is Q-learning, and the one used in the third stage is the recently proposed Case-Based Heuristically Accelerated Q-learning. A set of empirical evaluations was conducted on transferring the learning between two domains, the Acrobot and RoboCup 3D: the policy learned during the solution of the Acrobot problem is transferred and used to speed up the learning of stability policies for a humanoid robot in the RoboCup 3D simulator. The results show that the use of this algorithm can lead to a significant improvement in the performance of the agent.
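
A rough picture of the three stages, for readers unfamiliar with the setup: the sketch below is an illustrative toy version in Python, where a lookup table stands in for the action-mapping neural network and the case base simply records the greedy source action per state. None of the sizes, mappings or weights come from the paper.

    import numpy as np

    # Illustrative three-stage structure in a tiny tabular setting.
    rng = np.random.default_rng(0)
    n_states = 20
    n_src_actions, n_tgt_actions = 3, 5

    # Stage 1: assume Q-learning has already been run on the source task; the
    # case base stores the greedy source action for each state.
    Q_source = rng.random((n_states, n_src_actions))
    case_base = {s: int(np.argmax(Q_source[s])) for s in range(n_states)}

    # Stage 2: map each source action to a target action. The paper trains a
    # neural network for this step; a lookup table stands in for it here.
    action_map = {0: 1, 1: 3, 2: 4}

    # Stage 3: turn the mapped cases into a heuristic H(s, a) that biases
    # action selection in the target task, as in HAQL above.
    xi = 1.0
    H = np.zeros((n_states, n_tgt_actions))
    for s, a_src in case_base.items():
        H[s, action_map[a_src]] = xi

    Q_target = np.zeros((n_states, n_tgt_actions))

    def choose_target_action(state, epsilon=0.1):
        """Epsilon-greedy over Q + H in the target domain."""
        if rng.random() < epsilon:
            return int(rng.integers(n_tgt_actions))
        return int(np.argmax(Q_target[state] + H[state]))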


Artificial Intelligence | 2015

Transferring knowledge as heuristics in reinforcement learning

Reinaldo A. C. Bianchi; Luiz A. Celiberto; Paulo E. Santos; Jackson P. Matsuura; Ramon López de Mántaras

The goal of this paper is to propose and analyse a transfer learning meta-algorithm that allows the implementation of distinct methods using heuristics, obtained from one (simpler) domain (the source), to accelerate a Reinforcement Learning procedure in another domain (the target). This meta-algorithm works in three stages: first, it uses a Reinforcement Learning step to learn a task on the source domain, storing the knowledge thus obtained in a case base; second, it performs an unsupervised mapping of source-domain actions to target-domain actions; and, third, the case base obtained in the first stage is used as heuristics to speed up the learning process in the target domain. A set of empirical evaluations was conducted in two target domains: the 3D mountain car (using a case base learned from a 2D simulation) and stability learning for a humanoid robot in the RoboCup 3D Soccer Simulator (using knowledge learned from the Acrobot domain). The results attest that our transfer learning algorithm outperforms recent heuristically accelerated reinforcement learning and transfer learning algorithms.
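
For continuous target domains such as the 3D mountain car, the case base has to be queried by similarity rather than by exact state. The fragment below sketches one plausible retrieval rule, assuming a nearest-neighbour match with a distance threshold; the stored cases, the state dimensionality and the threshold are made up for illustration and are not taken from the paper.

    import numpy as np

    # Each case pairs a (normalized) source-domain state with the mapped
    # target-domain action that the source policy chose there. Values are
    # illustrative assumptions.
    cases_states = np.array([[0.1, 0.2], [0.8, 0.5], [0.4, 0.9]])
    cases_actions = np.array([2, 0, 1])
    max_distance = 0.3   # cases farther than this are considered dissimilar

    def heuristic(state, n_actions=3):
        """Return H(state, .): a bonus on the action suggested by the most
        similar case, or all zeros if no case is close enough."""
        h = np.zeros(n_actions)
        d = np.linalg.norm(cases_states - state, axis=1)
        nearest = int(np.argmin(d))
        if d[nearest] <= max_distance:
            h[cases_actions[nearest]] = 1.0
        return h

    print(heuristic(np.array([0.12, 0.22])))   # bonus on action 2
    print(heuristic(np.array([0.5, 0.1])))     # no similar case -> all zeros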


Portuguese Conference on Artificial Intelligence | 2007

Heuristic Q-learning soccer players: a new reinforcement learning approach to RoboCup simulation

Luiz A. Celiberto; Jackson P. Matsuura; Reinaldo A. C. Bianchi

This paper describes the design and implementation of a 4-player RoboCup Simulation 2D team, which was built by adding Heuristically Accelerated Reinforcement Learning capabilities to basic players of the well-known UvA Trilearn team. The implemented agents learn by using a recently proposed Heuristic Reinforcement Learning algorithm, the Heuristically Accelerated Q-Learning (HAQL), which allows the use of heuristics to speed up the well-known Reinforcement Learning algorithm Q-Learning. A set of empirical evaluations was conducted in the RoboCup 2D Simulator, and experimental results obtained while playing against other teams show that the approach adopted here is very promising.


Latin American Robotics Symposium | 2016

Transfer Learning Heuristically Accelerated Algorithm: A Case Study with Real Robots

Luiz A. Celiberto; Reinaldo A. C. Bianchi; Paulo E. Santos

Reinforcement Learning (RL) is a successful technique for learning the solutions of control problems from an agent's interactions with its domain. However, RL is known to be inefficient for real-world applications. In this paper we propose to use a combination of case-based reasoning (CBR) and heuristically accelerated reinforcement learning methods, aiming to speed up a Reinforcement Learning algorithm in a transfer learning problem. We show results of applying this method in a robot soccer domain, where the use of the proposed method led to a significant improvement in the learning rate.


Journal of Intelligent and Robotic Systems | 2018

Heuristically Accelerated Reinforcement Learning by Means of Case-Based Reasoning and Transfer Learning

Reinaldo A. C. Bianchi; Paulo E. Santos; Isaac J. Silva; Luiz A. Celiberto; Ramon López de Mántaras

Reinforcement Learning (RL) is a well-known technique for learning the solutions of control problems from the interactions of an agent with its domain. However, RL is known to be inefficient in real-world problems, where the state space and the set of actions grow quickly. Recently, heuristics, case-based reasoning (CBR) and transfer learning have been used as tools to accelerate the RL process. This paper investigates a class of algorithms called Transfer Learning Heuristically Accelerated Reinforcement Learning (TLHARL) that uses CBR as heuristics within a transfer learning setting to accelerate RL. The main contributions of this work are the proposal of a new TLHARL algorithm based on the traditional RL algorithm Q(λ) and the application of TLHARL to two distinct real-robot domains: robot soccer with small-scale robots and humanoid-robot stability learning. Experimental results show that our proposed method led to a significant improvement of the learning rate in both domains.
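
Since the new TLHARL variant is built on Q(λ), the sketch below recalls the Watkins-style Q(λ) update with eligibility traces; the transfer-learning heuristic would only change how the next action is chosen, not this update. Sizes and parameters are illustrative assumptions.

    import numpy as np

    # Minimal tabular sketch of a Watkins-style Q(lambda) update.
    n_states, n_actions = 50, 4
    Q = np.zeros((n_states, n_actions))
    E = np.zeros((n_states, n_actions))          # eligibility traces
    alpha, gamma, lam = 0.1, 0.95, 0.8

    def q_lambda_update(s, a, r, s_next, a_next):
        """One Q(lambda) step after observing (s, a, r, s_next) and then
        choosing a_next in s_next."""
        greedy_next = int(np.argmax(Q[s_next]))
        delta = r + gamma * Q[s_next, greedy_next] - Q[s, a]
        E[s, a] += 1.0                           # accumulating trace
        Q[:] += alpha * delta * E                # update all traced pairs
        if a_next == greedy_next:
            E[:] *= gamma * lam                  # decay traces
        else:
            E[:] = 0.0                           # cut traces after exploration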


Archive | 2016

Evaluating the Performance of Two Computer Vision Techniques for a Mobile Humanoid Agent Acting at RoboCup KidSized Soccer League

Claudio O. Vilão; Vinicius N. Ferreira; Luiz A. Celiberto; Reinaldo A. C. Bianchi

A humanoid robot capable of playing soccer needs to identify several objects in the soccer field. The robot has to be able to recognize the ball, teammates and opponents, inferring information such as their distance and estimated location. In order to achieve this key requisite, this paper analyzes two descriptor algorithms, HAAR and HOG, so that one of them can be used for recognizing humanoid robots with fewer false-positive alarms and a better frame-per-second rate. They were used with their respective classical classifiers, AdaBoost and SVM. As many different robots are available in the RoboCup domain, the descriptor needs to describe features in a way that they can be distinguished from the background, while the classifier also has to have good generalization capability. Although some limitations appeared in the tests, the results were beyond expectations. Given the results, the chosen descriptor should also be able to identify a mostly white ball, which is clearly a simpler object. The results for ball detection were also quite interesting.
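
To make the HOG + SVM side of the comparison concrete, the sketch below extracts HOG features from fixed-size image patches and trains a linear SVM to separate robot patches from background. The random patches, the 64x64 window size and the parameters are placeholders for a real labelled dataset, and scikit-image and scikit-learn are assumed here rather than named in the paper.

    import numpy as np
    from skimage.feature import hog          # HOG descriptor
    from sklearn.svm import LinearSVC        # linear SVM classifier

    rng = np.random.default_rng(0)
    patches = rng.random((40, 64, 64))       # stand-in grayscale patches
    labels = rng.integers(0, 2, size=40)     # 1 = robot, 0 = background

    def hog_features(patch):
        """Histogram-of-oriented-gradients feature vector for one patch."""
        return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))

    X = np.array([hog_features(p) for p in patches])
    clf = LinearSVC(C=1.0).fit(X, labels)

    # At run time, a sliding window over the camera image would be scored
    # patch by patch; positive scores are candidate robot detections.
    score = clf.decision_function(hog_features(patches[0]).reshape(1, -1))
    print(score)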


Latin American Robotics Symposium | 2015

Evaluating the Performance of Two Visual Descriptors Techniques for a Humanoid Robot

Claudio O. Vilão; Luiz A. Celiberto; Reinaldo A. C. Bianchi

A humanoid robot capable of playing soccer needs to know where opponents and teammates are on the soccer field. The robot has to be able to recognize teammates and opponents, inferring information such as the distance and estimated location of the other robots. In order to achieve this key requisite, this paper analyzes two descriptor algorithms, HAAR and HOG, so that one of them can be used for recognizing humanoid robots with fewer false-positive alarms and a better frame-per-second rate. They were used with their respective classical classifiers, AdaBoost and SVM. As many different robots are available in the RoboCup domain, the descriptor needs to describe features in a way that they can be distinguished from the background, while the classifier also has to have good generalization capability. Although some limitations appeared in the tests, the results were beyond expectations.


Machine Learning and Data Mining in Pattern Recognition | 2011

Investigation in transfer learning: better way to apply transfer learning between agents

Luiz A. Celiberto; Jackson P. Matsuura

This paper proposes to investigate a better way to apply Transfer Learning (TL) between agents to speed up the Q-learning Reinforcement Learning algorithm, combining Case-Based Reasoning (CBR) and Heuristically Accelerated Reinforcement Learning (HARL) techniques. The experiments compared different approaches to Transfer Learning, where actions learned in the Acrobot problem are used to speed up the learning of stability policies for RoboCup 3D. The results confirm that the same Transfer Learning information can yield different results, depending on how it is applied.


2018 Simposio Brasileiro de Sistemas Eletricos (SBSE) | 2018

Control strategy for reducing energy consumption in a two wheel self-balancing vehicle

Ageu Alves dos Santos; Luiz Alberto Luz de Almeida; Felipe Sadami; Luiz A. Celiberto

Collaboration


Top co-authors of Luiz A. Celiberto and their affiliations:

Jackson P. Matsuura, Instituto Tecnológico de Aeronáutica

Paulo E. Santos, Centro Universitário da FEI

Ramon López de Mántaras, Spanish National Research Council

Carlos H. C. Ribeiro, Instituto Tecnológico de Aeronáutica

Felipe Sadami, Universidade Federal do ABC