Nikolaos Tziortziotis
University of Ioannina
Publications
Featured research published by Nikolaos Tziortziotis.
european workshop on reinforcement learning | 2011
Nikolaos Tziortziotis; Konstantinos Blekas
In this study we present a sparse Bayesian framework for value function approximation. The proposed method is based on the on-line construction of a dictionary of states, which are collected as the agent explores the environment. A linear regression model is established for the observed partial discounted return of these dictionary states, where we employ the Relevance Vector Machine (RVM) and exploit the enhanced modeling capability afforded by its embedded sparsity properties. To speed up the optimization procedure and handle large-scale problems, an incremental strategy is adopted. A number of experiments were conducted on both simulated and real environments, where we obtained promising results in comparison with another Bayesian approach that uses Gaussian processes.
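The on-line dictionary of states mentioned above can be illustrated with a minimal sketch: a visited state is stored only if it is sufficiently novel relative to the states already kept. This is a simplified distance-based criterion standing in for the paper's actual sparsification rule; the `threshold` parameter and Euclidean distance are assumptions for illustration.

```python
import numpy as np

def build_dictionary(states, threshold=0.5):
    """Online dictionary construction: a state is added only if it lies
    farther than `threshold` (Euclidean distance) from every stored state.
    A simplified novelty criterion, not the paper's exact rule."""
    dictionary = []
    for s in states:
        s = np.asarray(s, dtype=float)
        if all(np.linalg.norm(s - d) > threshold for d in dictionary):
            dictionary.append(s)
    return dictionary
```

States visited close to an already-stored state are discarded, keeping the regression model sparse as exploration proceeds.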
international conference on tools with artificial intelligence | 2012
Nikolaos Tziortziotis; Konstantinos Blekas
A significant issue in representing reinforcement learning agents in Markov decision processes is how to design efficient feature spaces in order to estimate the optimal policy. This study addresses the challenge by proposing a compact framework that employs an on-line clustering approach for constructing appropriate basis functions. It also performs a state-action trajectory analysis to gain valuable affinity information among clusters and estimate their transition dynamics. Value function approximation is used for policy evaluation in a least-squares temporal difference framework. The proposed method was evaluated in several simulated and real environments, where we obtained promising results.
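Basis functions built from cluster centres can be sketched as follows: each centre found by the on-line clustering step induces one Gaussian radial basis function, and a raw state is mapped to the vector of activations. The fixed `width` and the Gaussian form are illustrative assumptions, not necessarily the paper's exact construction.

```python
import numpy as np

def rbf_features(state, centers, width=1.0):
    """Map a raw state to one Gaussian RBF activation per cluster centre.
    The centres would come from the on-line clustering step; here they
    are given, and `width` is a hypothetical choice."""
    state = np.asarray(state, dtype=float)
    dists = np.array([np.linalg.norm(state - np.asarray(c, dtype=float))
                      for c in centers])
    return np.exp(-dists ** 2 / (2.0 * width ** 2))
```

The resulting feature vector is what a least-squares temporal difference method would then weight linearly to approximate the value function.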
hellenic conference on artificial intelligence | 2016
Konstantinos Tziortziotis; Nikolaos Tziortziotis; Kostas Vlachos; Konstantinos Blekas
This paper investigates the use of reinforcement learning for the navigation of an over-actuated marine platform in unknown environments. The proposed approach uses an online least-squares policy iteration scheme for value function approximation in order to estimate the optimal policy. We evaluate our approach on a simulation platform and report initial results on its ability to estimate optimal navigation policies in unknown environments under different environmental disturbances. The results are promising.
hellenic conference on artificial intelligence | 2014
Nikolaos Tziortziotis; Konstantinos Tziortziotis; Konstantinos Blekas
Reinforcement Learning (RL) algorithms have been promising methods for designing intelligent agents in games. Although their capability of learning in real time has already been proven, the high dimensionality of state spaces in most game domains can be seen as a significant barrier. This paper studies the popular arcade video game Ms. Pac-Man and outlines an approach for dealing with its large dynamic environment. Our motivation is to demonstrate that an abstract but informative state-space description plays a key role in the design of efficient RL agents; with such a description, the learning process can be sped up without the need for Q-function approximation. Several experiments were conducted on the multiagent MASON platform, where we measured the approach's ability to reach optimal generic policies, which enhances its generalization abilities.
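The idea of an abstract but informative state description can be sketched as follows: instead of raw maze coordinates, the agent observes a few compact quantities. The particular features below (nearest-ghost distance, nearest-pill distance, pills remaining, all under Manhattan distance) are hypothetical illustrations, not the paper's actual feature set.

```python
def abstract_state(pacman, ghosts, pills):
    """A hypothetical compact state description of the kind the paper
    advocates: a small tuple of informative quantities rather than raw
    maze coordinates. Feature names and choices are illustrative only."""
    dist = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan
    return (
        min(dist(pacman, g) for g in ghosts),  # nearest-ghost distance
        min(dist(pacman, p) for p in pills),   # nearest-pill distance
        len(pills),                            # pills remaining
    )
```

Because the abstract state space is small and discrete, tabular Q-learning becomes feasible without function approximation.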
european conference on machine learning | 2017
Nikolaos Tziortziotis; Christos Dimitrakakis
This paper proposes a fully Bayesian approach for Least-Squares Temporal Differences (LSTD), resulting in fully probabilistic inference of value functions that avoids the overfitting commonly experienced with classical LSTD when the number of features exceeds the number of samples. Sparse Bayesian learning provides an elegant solution through the introduction of a prior over the value function parameters. This yields the advantages of probabilistic predictions, a sparse model, and good generalisation capabilities, as irrelevant parameters are marginalised out. The algorithm efficiently approximates the posterior distribution through variational inference. We demonstrate experimentally the algorithm's ability to avoid overfitting.
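For context, the deterministic core that the Bayesian treatment builds on is the classical LSTD solution. The sketch below shows plain LSTD with a fixed ridge term standing in for a prior; the paper replaces this fixed regulariser with a sparsity-inducing prior whose parameters are inferred variationally, which this sketch does not attempt.

```python
import numpy as np

def lstd(phi, phi_next, rewards, gamma=0.99, reg=1e-3):
    """Classical LSTD: solve (A + reg*I) w = b with
    A = Phi^T (Phi - gamma * Phi') and b = Phi^T r.
    The fixed ridge term `reg` is a stand-in for the paper's
    variationally inferred sparsity-inducing prior."""
    phi = np.asarray(phi, dtype=float)
    phi_next = np.asarray(phi_next, dtype=float)
    A = phi.T @ (phi - gamma * phi_next)
    b = phi.T @ np.asarray(rewards, dtype=float)
    return np.linalg.solve(A + reg * np.eye(A.shape[1]), b)
```

When features outnumber samples, A is rank-deficient and the regularisation (or prior) is what makes the solve well-posed.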
electronic imaging | 2015
Katerina Pandremmenou; Nikolaos Tziortziotis; Seethal Paluri; W. Zhang; Konstantinos Blekas; Lisimachos P. Kondi; Sunil Kumar
We propose the use of the Least Absolute Shrinkage and Selection Operator (LASSO) regression method to predict the Cumulative Mean Squared Error (CMSE) incurred by the loss of individual slices in video transmission. We extract a number of quality-relevant features from the H.264/AVC video sequences, which are given as input to the LASSO. This method has the benefit of not only keeping a subset of the features with the strongest effects on video quality, but also producing accurate CMSE predictions. In particular, we study LASSO regression through two different architectures: the Global LASSO (G.LASSO) and the Local LASSO (L.LASSO). In G.LASSO, a single regression model is trained for all slice types together, while in L.LASSO, motivated by the fact that the values of some features depend closely on the considered slice type, each slice type has its own regression model, in an effort to improve LASSO's prediction capability. Based on the predicted CMSE values, we group the video slices into four priority classes. Additionally, we consider a video transmission scenario over a noisy channel, where Unequal Error Protection (UEP) is applied to all prioritized slices. The results demonstrate the efficiency of LASSO in estimating CMSE with high accuracy using only a few features.
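The feature-selecting behaviour of LASSO can be illustrated with a minimal coordinate-descent solver using soft-thresholding. This is a generic textbook LASSO, a stand-in for the G.LASSO/L.LASSO models; the penalty weight `alpha` and the assumption of standardised columns are illustrative choices.

```python
import numpy as np

def lasso_cd(X, y, alpha=0.1, iters=200):
    """Plain coordinate-descent LASSO with soft-thresholding.
    A generic sketch (columns of X assumed standardised; `alpha` is a
    hypothetical penalty), not the paper's trained models."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(iters):
        for j in range(p):
            # partial residual: remove feature j's current contribution
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r / n
            z = X[:, j] @ X[:, j] / n
            # soft-threshold: small correlations are driven exactly to zero
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / z
    return w
```

Coefficients of weakly relevant features are shrunk exactly to zero, which is what lets LASSO keep only the features with the strongest effect on video quality.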
hellenic conference on artificial intelligence | 2012
Nikolaos Tziortziotis; Konstantinos Blekas
Value function approximation is a critical task in solving Markov decision processes and accurately modeling reinforcement learning agents. A significant issue is how to construct efficient feature spaces from samples collected in the environment in order to obtain an optimal policy. This study addresses the challenge by proposing an on-line kernel-based clustering approach for building appropriate basis functions during the learning process. The method uses a kernel function capable of handling the state-action pairs sequentially generated by the agent. At each time step, the procedure either adds a new cluster or adjusts the winning cluster's parameters. By considering the value function as a linear combination of the constructed basis functions, the weights are optimized in a temporal-difference framework so as to minimize the Bellman approximation error. The proposed method was evaluated in numerous well-known simulated environments.
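The add-or-adjust step described above can be sketched with a simple winner-take-all rule in Euclidean space: for each incoming state-action vector, either move the nearest centre towards it or spawn a new cluster when the point is novel. The `novelty` threshold, learning rate, and use of plain Euclidean distance (rather than the paper's kernel) are assumptions for illustration.

```python
import numpy as np

def online_cluster(points, novelty=1.0, lr=0.2):
    """On-line clustering sketch: per incoming state-action vector, either
    adjust the winning (nearest) centre towards it, or create a new cluster
    when the nearest centre exceeds a novelty threshold. Threshold,
    learning rate, and distance are hypothetical choices."""
    centers = []
    for x in points:
        x = np.asarray(x, dtype=float)
        if not centers:
            centers.append(x.copy())
            continue
        d = [np.linalg.norm(x - c) for c in centers]
        j = int(np.argmin(d))
        if d[j] > novelty:
            centers.append(x.copy())          # novel: spawn a new cluster
        else:
            centers[j] += lr * (x - centers[j])  # winner update
    return centers
```

The centres produced this way define the basis functions whose linear weights the temporal-difference step then fits.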
Journal of Machine Learning Research | 2014
Nikolaos Tziortziotis; Christos Dimitrakakis; Konstantinos Blekas
IEEE Transactions on Computational Intelligence and Ai in Games | 2016
Nikolaos Tziortziotis; Georgios Papagiannis; Konstantinos Blekas
international joint conference on artificial intelligence | 2013
Nikolaos Tziortziotis; Christos Dimitrakakis; Konstantinos Blekas