Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Onder Tutsoy is active.

Publication


Featured research published by Onder Tutsoy.


Transactions of the Institute of Measurement and Control | 2016

Reinforcement learning analysis for a minimum time balance problem

Onder Tutsoy; Martin Brown

Reinforcement learning was developed to solve complex learning control problems where only a minimal amount of a priori knowledge about the system dynamics exists. It has also been used as a model of cognitive learning in humans and applied to systems such as pole balancing and humanoid robots to study embodied cognition. However, closed-form analysis of value function learning based on higher-order, unstable test problem dynamics has rarely been considered. In this paper, firstly, a second-order, unstable balance test problem is used to investigate issues associated with value function parameter convergence and the rate of convergence. In particular, the convergence of the minimum time value function is analysed, where the minimum time optimal control policy is assumed known. It is shown that the temporal difference (TD) error introduces a null space associated with the experiment termination basis function during the simulation. As this effect arises from termination, or from any kind of switching in the control signal, the null space also appears in the TD error for more general higher-order systems. Secondly, the rate of parameter convergence is analysed and it is shown that the residual gradient algorithm converges faster than TD(0) for this particular test problem. Thirdly, the impact of the finite horizon on both value function and control policy learning is analysed for the case of an unknown control policy with added random exploration noise.
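
As a reading aid only, the following sketch contrasts the two update rules compared in the paper, TD(0) and the residual gradient algorithm, for a linear value function approximation. The scalar plant, fixed gain, quadratic reward, polynomial basis, and step size are illustrative assumptions and are not the paper's second-order balance test problem.

```python
import numpy as np

# Assumed unstable scalar plant x_{k+1} = a*x_k + b*u_k, stabilised by a
# known linear policy u_k = -K*x_k (illustrative values only).
a, b, K, gamma = 1.2, 1.0, 0.5, 0.95

def features(x):
    # Assumed polynomial basis for the value function approximation.
    return np.array([1.0, x, x**2])

def step(x):
    u = -K * x
    x_next = a * x + b * u
    reward = -(x**2 + u**2)          # quadratic stage cost as negative reward
    return x_next, reward

theta_td = np.zeros(3)               # TD(0) parameters
theta_rg = np.zeros(3)               # residual gradient parameters
alpha = 0.01

for episode in range(200):
    x = np.random.uniform(-1.0, 1.0)
    for k in range(50):              # finite-horizon episode
        x_next, r = step(x)
        phi, phi_next = features(x), features(x_next)

        # TD(0): semi-gradient update that bootstraps on the next estimate.
        delta = r + gamma * phi_next @ theta_td - phi @ theta_td
        theta_td += alpha * delta * phi

        # Residual gradient: true gradient of the squared TD error, which
        # also differentiates through the next-state value estimate.
        delta = r + gamma * phi_next @ theta_rg - phi @ theta_rg
        theta_rg -= alpha * delta * (gamma * phi_next - phi)

        x = x_next

print("TD(0) parameters:            ", theta_td)
print("residual gradient parameters:", theta_rg)
```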


Journal of Experimental and Theoretical Artificial Intelligence | 2016

An analysis of value function learning with piecewise linear control

Onder Tutsoy; Martin Brown

Reinforcement learning (RL) algorithms attempt to learn optimal control actions by iteratively estimating a long-term measure of system performance, the so-called value function. For example, RL algorithms have been applied to walking robots to examine the connection between robot motion and the brain, which is known as embodied cognition. In this paper, RL algorithms are analysed using an exemplar test problem. A closed-form solution for the value function is calculated and represented in terms of a set of basis functions and parameters, which is used to investigate parameter convergence. The value function expression is shown to have a polynomial form, where the polynomial terms depend on the plant's parameters and the value function's discount factor. It is shown that the temporal difference error introduces a null space for the differenced higher-order basis associated with the effects of controller switching (saturated to linear control, or terminating an experiment), apart from the time of the switch. This leads to slow convergence in the relevant subspace. It is also shown that badly conditioned learning problems can occur, and that this is a function of the value function's discount factor and the controller switching points. Finally, a comparison is performed between the residual gradient and TD(0) learning algorithms, and it is shown that the former has a faster rate of convergence for this test problem.
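
The ill-conditioning discussed above can be probed numerically. The sketch below assembles the normal matrix of a least-squares TD fit for a polynomial basis along a simulated trajectory and reports its condition number for several discount factors; the scalar plant, linear gain, and basis are illustrative assumptions rather than the paper's exemplar problem.

```python
import numpy as np

a, b, K = 1.2, 1.0, 0.5                          # assumed scalar plant and gain

def features(x):
    return np.array([1.0, x, x**2, x**3])        # assumed polynomial basis

def rollout(x0, horizon=60):
    xs = [x0]
    for _ in range(horizon):
        xs.append((a - b * K) * xs[-1])          # closed-loop trajectory
    return xs

xs = rollout(0.8)
for gamma in (0.5, 0.9, 0.99):
    # Differenced basis phi(x_k) - gamma*phi(x_{k+1}) appearing in the
    # TD fixed-point / least-squares equations.
    D = np.array([features(xs[k]) - gamma * features(xs[k + 1])
                  for k in range(len(xs) - 1)])
    A = D.T @ D                                  # normal matrix of the fit
    print(f"gamma = {gamma:4.2f}: condition number = {np.linalg.cond(A):.3e}")
```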


Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering | 2015

Adaptive estimator design for unstable output error systems: A test problem and traditional system identification based analysis

Onder Tutsoy; Sule Colak

A key open question in adaptive estimator design is how to ensure that the parameters of the proposed algorithms converge to nearly correct solutions, and hence that the learning algorithm is unbiased. Moreover, determining the speed of parameter convergence is important as it provides insight into the performance of the learning algorithms. The main contributions of the article are fourfold. The first is that the article introduces an adaptive estimator that learns the discounted Q-function and an approximate optimal control policy without requiring knowledge of the linear, discrete-time, unstable output error system dynamics, using only noisy system measurements. The simulation results show that the adaptive estimator minimizes the stochastic cost function and temporal difference error and also learns the approximate Q-function together with the control policy. The second is to take a different approach through a simple test problem and investigate issues associated with the Q-function's representation and parametric convergence. In particular, the terminal convergence problem is analyzed with a known optimal control policy, where the aim is to accurately learn only the Q-function. It is parameterized by terms which are functions of the unknown plant's parameters and the Q-function's discount factor, and their convergence properties are analyzed and compared with the adaptive estimator. The third is to show that even though the adaptive estimator with a large Q-function discount factor yields larger control feedback gains, so that the state converges upright faster, the learning problem becomes badly conditioned, and hence parameter convergence becomes sluggish, as the Q-function discount factor approaches the inverse of the dominant pole of the unstable system. Finally, the fourth is to compare the state output learned by the adaptive estimator with those obtained from traditional system identification algorithms. Simulation results for a higher-order unstable output error system show that the adaptive estimator closely follows the real system output, whereas the system identification algorithms do not.
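
To make the Q-function idea concrete, the sketch below evaluates a quadratic Q-function for an assumed scalar linear plant from noisy data by batch least squares and extracts the greedy feedback gain. This is only an illustration of Q-function learning from measurements, not the adaptive estimator proposed in the article; the plant, noise levels, and discount factor are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed unstable scalar plant with a small additive disturbance as a crude
# stand-in for the article's output-error noise (illustrative values only).
a, b, gamma = 1.3, 1.0, 0.9
K = 0.8                                  # stabilising behaviour-policy gain

def phi(x, u):
    # Assumed quadratic parameterisation of Q(x, u).
    return np.array([x * x, x * u, u * u])

rows, targets = [], []
x = 0.5
for k in range(400):
    u = -K * x + 0.1 * rng.standard_normal()         # exploration noise
    cost = x * x + u * u
    x_next = a * x + b * u + 0.01 * rng.standard_normal()
    u_next = -K * x_next                              # on-policy next action
    # Bellman equation Q(x,u) = cost + gamma*Q(x',u') written as a
    # linear regression in the Q-function parameters.
    rows.append(phi(x, u) - gamma * phi(x_next, u_next))
    targets.append(cost)
    x = x_next

theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
qxx, qxu, quu = theta
K_greedy = 0.5 * qxu / quu               # u = -K_greedy*x minimises Q(x, u)
print("learned Q parameters:", theta)
print("greedy feedback gain:", K_greedy)
```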


World Congress on Intelligent Control and Automation | 2012

An exemplar test problem on parameter convergence analysis of temporal difference algorithms

Martin Brown; Onder Tutsoy

Reinforcement learning techniques have been developed to solve difficult learning control problems for which only a small amount of a priori knowledge about the system dynamics is available. In this paper, a simple unstable exemplar test problem is proposed to investigate issues in parametric convergence of the value function. A specific closed-form solution for the value function is determined, which has a polynomial form. It is proved that the temporal difference error introduces a null space associated with the finite horizon basis function during the control trajectory. The learning problem can only be nonsingular if termination is handled correctly, and a number of possible solutions are introduced. This result was revealed only because of the derived closed-form solution for the value function.
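
The null space described above can be reproduced with a basis that contains an explicit finite-horizon term. In the sketch below, the horizon basis function gamma**(N - k) is cancelled exactly by the TD differencing on every non-terminal step, so the corresponding column of the regression matrix is excited only by a correctly handled terminal transition. The plant, gain, and horizon are assumed values, not the paper's test problem.

```python
import numpy as np

# Assumed scalar plant under a fixed linear gain, simulated over a finite
# horizon of N steps; the basis adds a horizon term gamma**(N - k).
a, b, K, gamma, N = 1.2, 1.0, 0.5, 0.9, 30

xs = [0.7]
for _ in range(N):
    xs.append((a - b * K) * xs[-1])

def basis(x, k):
    return np.array([1.0, x, x * x, gamma ** (N - k)])

rows = []
for k in range(N):
    if k < N - 1:
        # Non-terminal step: the horizon column of the differenced basis,
        # gamma**(N-k) - gamma*gamma**(N-k-1), is (numerically) zero.
        rows.append(basis(xs[k], k) - gamma * basis(xs[k + 1], k + 1))
    else:
        # Terminal step: the next value is zero, so the full basis appears
        # and the horizon direction is finally excited.
        rows.append(basis(xs[k], k))

D_handled = np.array(rows)
D_ignored = np.array(rows[:-1])      # terminal transition mishandled/dropped

for name, D in (("termination handled", D_handled),
                ("termination ignored", D_ignored)):
    svals = np.linalg.svd(D, compute_uv=False)
    print(f"{name}: smallest singular value = {svals[-1]:.3e}")
```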


ISA Transactions | 2018

Design of a completely model free adaptive control in the presence of parametric, non-parametric uncertainties and random control signal delay

Onder Tutsoy; Duygun Erol Barkana; Harun Tugal

In this paper, an adaptive controller is developed for discrete-time linear systems that takes into account parametric uncertainty, internal and external non-parametric random uncertainties, and time-varying control signal delay. Additionally, the proposed adaptive controller is designed in such a way that it is entirely model free. Even though these properties have been studied separately, they have not been taken into account all together in the adaptive control literature. The Q-function is used to estimate the long-term performance of the proposed adaptive controller. The control policy is generated based on the predicted long-term value, and this policy searches for an optimal stabilizing control signal for uncertain and unstable systems. The derived control law does not require an initial stabilizing control assumption, as recent approaches in the literature do. Learning error, control signal convergence, the minimized Q-function, and the instantaneous reward are analyzed to demonstrate the stability and effectiveness of the proposed adaptive controller in a simulation environment. Finally, key insights into the convergence of the learning and control signal parameters are provided.
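
The sketch below illustrates only the uncertainty and delay model described above: a discrete-time scalar plant with a perturbed input gain, an additive random disturbance, and a random time-varying delay applied to the control signal through a short buffer. The fixed feedback gain stands in for the learned policy and every numerical value is an assumption; the model-free Q-function controller itself is not reproduced here.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)

a_nom, b_nom = 1.05, 1.0     # assumed nominal plant parameters
K = 0.5                      # placeholder feedback gain (not the learned policy)
max_delay = 2                # maximum control signal delay in samples

buffer = deque([0.0] * (max_delay + 1), maxlen=max_delay + 1)
x = 1.0
for k in range(60):
    u = -K * x                                   # commanded control signal
    buffer.appendleft(u)
    delay = rng.integers(0, max_delay + 1)       # random, time-varying delay
    u_applied = buffer[delay]                    # delayed signal reaches the plant

    b_true = b_nom * (1.0 + 0.1 * rng.standard_normal())   # parametric uncertainty
    w = 0.02 * rng.standard_normal()                        # non-parametric disturbance
    x = a_nom * x + b_true * u_applied + w

    if k % 10 == 0:
        print(f"k = {k:2d}, x = {x:+.4f}, delay = {delay}")
```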


Transactions of the Institute of Measurement and Control | 2017

Learning to balance an NAO robot using reinforcement learning with symbolic inverse kinematic

Onder Tutsoy; Duygun Erol Barkana; Sule Colak

An autonomous humanoid robot (HR) with learning and control algorithms is able to balance itself during sitting down, standing up, walking and running operations, as humans do. In this study, reinforcement learning (RL) with a complete symbolic inverse kinematic (IK) solution is developed to balance the full lower body of a three-dimensional (3D) NAO HR, which has 12 degrees of freedom. The IK solution converts the lower body trajectories, which are learned by RL, into reference positions for the joints of the NAO robot. This reduces the dimensionality of the learning and control problems, since the IK solution integrated with the RL eliminates the need to use the whole HR state. The IK solution in 3D space takes into account not only the legs but also the full lower body; hence, it is possible to incorporate the effect of the foot and hip lengths on the IK solution. The accuracy and the capability of following real joint states are evaluated in the simulation environment. MapleSim is used to model the full lower body, and the developed RL is combined with this model by utilizing Modelica and Maple software properties. The simulation results show that the value function is maximized, the temporal difference error is reduced to zero, the lower body is stabilized in the upright position, and the convergence speed of the RL is improved by use of the symbolic IK solution.
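
A drastically reduced illustration of the closed-form IK step is given below: a planar two-link leg (hip pitch and knee pitch) whose joint references are computed in closed form from a desired foot position and verified with forward kinematics. The link lengths, frames, and target are assumptions; the actual solution in the paper covers the full 12-degree-of-freedom 3D lower body, including the foot and hip lengths.

```python
import numpy as np

def two_link_ik(x, y, l1=0.10, l2=0.10):
    """Closed-form IK for a planar two-link leg (hip pitch + knee pitch).

    Returns (hip, knee) angles in radians for a foot target (x, y) given in
    the hip frame.  Link lengths are illustrative placeholders.
    """
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    c2 = np.clip(c2, -1.0, 1.0)          # guard against unreachable targets
    knee = np.arccos(c2)                  # knee-forward solution branch
    hip = np.arctan2(y, x) - np.arctan2(l2 * np.sin(knee),
                                        l1 + l2 * np.cos(knee))
    return hip, knee

def forward(hip, knee, l1=0.10, l2=0.10):
    # Forward kinematics used to check the closed-form solution.
    return (l1 * np.cos(hip) + l2 * np.cos(hip + knee),
            l1 * np.sin(hip) + l2 * np.sin(hip + knee))

target = (0.12, -0.10)                    # desired foot position in the hip frame
hip, knee = two_link_ik(*target)
print("joint references (deg):", np.degrees([hip, knee]))
print("reconstructed foot position:", forward(hip, knee))
```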


Conference Towards Autonomous Robotic Systems | 2011

On the analysis of parameter convergence for temporal difference learning of an exemplar balance problem

Martin Brown; Onder Tutsoy

Bipedal walking/locomotion is a challenging control problem, but also an interesting problem for studying learning algorithms. In 1981, Barto and Sutton developed an RL method based on temporal difference (TD) learning which used the concept of learning from failure. Moreover, over the last few years the poor/slow convergence issue has gained more attention from researchers [1]. In this paper, a closed-form value function solution for an unstable plant and an optimal polynomial basis for the value function are presented. The linear TD(0) algorithm is stated, and it is shown that the finite horizon effect, which arises from repeatedly simulating the system over a finite horizon, introduces a near singularity/bias in the parameter estimation process. A method is proposed to overcome this problem. Finally, the simulation results for the exemplar problem are presented, and the parameter convergence is analyzed.


Optimal Control Applications & Methods | 2016

Chaotic dynamics and convergence analysis of temporal difference algorithms with bang‐bang control

Onder Tutsoy; Martin Brown


Asian Journal of Control | 2016

Design and Comparison Base Analysis of Adaptive Estimator for Completely Unknown Linear Systems in the Presence of OE Noise and Constant Input Time Delay

Onder Tutsoy


International Conference on Control Applications | 2012

Convergence analysis of temporal difference learning algorithms based on a test problem

Onder Tutsoy; Martin Brown; Hong Wang

Collaboration


Dive into Onder Tutsoy's collaborations.

Top Co-Authors

Martin Brown

University of Manchester

Sule Colak

Adana Science and Technology University

Hong Wang

Pacific Northwest National Laboratory

Harun Tugal

Adana Science and Technology University
