Publications
Featured research published by Alexander Paprotny.
Archive | 2013
Alexander Paprotny; Michael Thess
We introduce the concept of a sparse grid and show how this powerful approach to function space discretization may be employed to tackle high-dimensional machine learning problems of regression and classification. In particular, we address the incremental computation of sparse grid regression coefficients so as to meet the requirements of realtime data mining. Finally, we present experimental results on real-world data sets.
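To give a sense of why sparse grids mitigate the curse of dimensionality, the following sketch merely counts grid points: it compares a regular sparse grid (level multi-indices l with |l|_1 <= n + d - 1, no boundary points) with the full grid of the same level. It is a counting illustration only, not the regression algorithm of the chapter.

```python
from itertools import product

def full_grid_size(d, n):
    # full grid of level n: (2^n - 1) interior points per dimension
    return (2 ** n - 1) ** d

def sparse_grid_size(d, n):
    # regular sparse grid: keep only level vectors l with |l|_1 <= n + d - 1;
    # a level vector l contributes prod_i 2^(l_i - 1) hierarchical points
    total = 0
    for l in product(range(1, n + 1), repeat=d):
        if sum(l) <= n + d - 1:
            pts = 1
            for li in l:
                pts *= 2 ** (li - 1)
            total += pts
    return total

for d in (2, 4, 6):
    print(d, full_grid_size(d, 5), sparse_grid_size(d, 5))
# the full grid grows exponentially in d, the sparse grid only mildly
```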
Archive | 2013
Alexander Paprotny; Michael Thess
We introduce SVD/PCA-based matrix factorization frameworks and present applications to prediction-based recommendation. Furthermore, we devise incremental algorithms that make it possible to compute the considered factorizations adaptively in a realtime setting. Besides the SVD- and PCA-based frameworks, we discuss more sophisticated approaches such as non-negative matrix factorization and Lanczos-based methods and assess their effectiveness by means of experiments on real-world data. Moreover, we address a compressive sensing-based approach to Netflix-like matrix completion problems and conclude the chapter by proposing a remedy to the complexity issues in computing large elements of the low-rank matrices, which, as we shall see, is a recurring problem in factorization-based prediction methods.
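As a minimal illustration of prediction-based recommendation via SVD (not the incremental algorithms devised in the chapter), the sketch below reconstructs a rank-k approximation of a user-item matrix and treats the reconstructed entries as predicted affinities. The toy matrix and the zero-fill of unknown entries are assumptions made purely for the example.

```python
import numpy as np

def svd_predict(R, k):
    """Rank-k SVD approximation of a (dense, pre-filled) interaction matrix R.
    Missing entries are assumed to have been filled (here: zeros) beforehand;
    the reconstruction serves as the score used for recommendation."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# toy example: 4 users x 5 items, 0 = unknown rating
R = np.array([[5, 4, 0, 1, 0],
              [4, 0, 4, 1, 1],
              [1, 1, 0, 5, 4],
              [0, 1, 5, 4, 0]], dtype=float)
scores = svd_predict(R, k=2)  # recommend the unseen items with the largest scores
```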
Archive | 2013
Alexander Paprotny; Michael Thess
We give a short introduction to reinforcement learning. This includes basic concepts like Markov decision processes, policies, state-value and action-value functions, and the Bellman equation. We discuss solution methods such as policy and value iteration, online methods such as temporal-difference learning, and state fundamental convergence results.
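For concreteness, here is a standard value iteration sketch for a finite MDP, illustrating the Bellman optimality operator mentioned above. The tabular representation (one transition matrix and reward vector per action) is an assumption of the example, not a prescription from the chapter.

```python
import numpy as np

def value_iteration(P, r, gamma=0.9, tol=1e-8):
    """Value iteration for a finite MDP.
    P[a] is the |S| x |S| transition matrix under action a, r[a] the expected
    reward vector; iterate the Bellman optimality operator to its fixed point."""
    n_states = P[0].shape[0]
    V = np.zeros(n_states)
    while True:
        Q = np.array([r[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)  # optimal values and a greedy policy
        V = V_new
```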
Archive | 2013
Alexander Paprotny; Michael Thess
We first discuss the requirements of a modern data mining system and show that the approach presented in this book fulfills most of them. However, the full realization of this approach is often thwarted by fundamental problems in the development of the required mathematical instruments. In particular, most of the computational methods developed by mathematicians over the last centuries are designed for engineering problems. We stress the differences from the requirements of data analysis problems and encourage the development of appropriate frameworks. Here, control theory should play an especially important role.
Archive | 2013
Alexander Paprotny; Michael Thess
We explore the subject of uniting the control-theoretic with the factorization-based approach to recommendation, arguing that tensor factorization may be employed to overcome the combinatorial complexity of more sophisticated MDP models that take a history of previous states rather than a single state into account. Specifically, we introduce a tensor representation of the transition probabilities of Markov-k-processes and devise a Tucker-based approximation architecture that relies crucially on the notion of an aggregation basis described in Chap. 6. As our method requires a partitioning of the set of state transition histories, we are left with the challenge of determining a suitable partitioning, for which we propose a genetic algorithm.
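As an illustration of the tensor viewpoint (not the Tucker-based architecture of the chapter), the sketch below estimates the order-(k+1) transition tensor of a Markov-k chain by simple counting. Its dense size grows like |S|^(k+1), which is exactly the combinatorial blow-up that an aggregation-based low-rank approximation is meant to tame. The session format is an assumption of the example.

```python
import numpy as np

def transition_tensor(sessions, n_states, k=2):
    """Empirical transition tensor of a Markov-k chain:
    T[i, j, l] ~ P(next = l | previous two states = (i, j)) for k = 2,
    estimated by counting over sessions (lists of state indices).
    For realistic state spaces this dense tensor is far too large to store."""
    T = np.zeros((n_states,) * (k + 1))
    for s in sessions:
        for t in range(len(s) - k):
            T[tuple(s[t:t + k + 1])] += 1.0
    norm = T.sum(axis=-1, keepdims=True)
    return np.divide(T, norm, out=np.zeros_like(T), where=norm > 0)
```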
Archive | 2013
Alexander Paprotny; Michael Thess
We describe the application of reinforcement learning to recommendation engines. In doing so, we introduce RE-specific empirical assumptions that reduce the complexity of RL in order to make it applicable to real-life recommendation problems. In particular, we provide a new approach for estimating the transition probabilities of multiple recommendations based on those of single recommendations. The estimation of transition probabilities for single recommendations is left as an open problem that is covered in Chap. 5. Finally, we introduce a simple framework for testing online recommendations.
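The chapter's empirical assumptions are not spelled out in this abstract. As a purely hypothetical stand-in, the sketch below derives the probability that at least one recommendation in a set is accepted from single-recommendation acceptance probabilities under an independence assumption; this simplification is an assumption of the example, not necessarily the approach taken in the book.

```python
def multi_rec_acceptance(p_single):
    """Probability that at least one of a list of recommendations is accepted,
    assuming (for illustration only) that acceptances are independent.
    p_single is a list of single-recommendation acceptance probabilities."""
    p_none = 1.0
    for p in p_single:
        p_none *= (1.0 - p)
    return 1.0 - p_none

multi_rec_acceptance([0.10, 0.05, 0.02])  # ~0.16
```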
Archive | 2013
Alexander Paprotny; Michael Thess
We consider generalizations of the previously described SVD-based factorization methods to a tensor framework and discuss applications to recommendation. In particular, we generalize the previously introduced incremental SVD algorithm to higher dimensions. Furthermore, we briefly address other tensor factorization frameworks such as CANDECOMP/PARAFAC, the hierarchical SVD, and the Tensor-Train decomposition.
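A compact way to see a Tucker decomposition in action is the truncated higher-order SVD, sketched below with NumPy. This is a generic HOSVD routine, not the incremental higher-dimensional algorithm developed in the chapter.

```python
import numpy as np

def hosvd(T, ranks):
    """Truncated higher-order SVD, a simple route to a Tucker decomposition.
    Each factor matrix holds the leading left singular vectors of the mode
    unfolding; the core is T projected onto these factors."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        # mode-n product with U^T: contract the mode-th axis of the core with U
        core = np.moveaxis(np.tensordot(core, U, axes=(mode, 0)), -1, mode)
    return core, factors
```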
Archive | 2013
Alexander Paprotny; Michael Thess
We address the question of how hierarchical, or multigrid, methods may figure in dynamic programming and reinforcement learning for recommendation engines.
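One simple way such hierarchies can enter dynamic programming is state aggregation: evaluate a fixed policy on a coarse model obtained by pooling states, then prolongate the coarse values back to the fine states. The sketch below illustrates only this idea; the grouping vector and the averaging restriction are assumptions of the example, not the multigrid scheme discussed in the chapter.

```python
import numpy as np

def aggregated_policy_evaluation(P, r, groups, gamma=0.9):
    """Coarse-grid sketch of aggregation for policy evaluation.
    P: n x n transition matrix of a fixed policy, r: reward vector,
    groups: integer array assigning each state to a (non-empty) group."""
    n = P.shape[0]
    n_groups = groups.max() + 1
    A = np.zeros((n, n_groups))
    A[np.arange(n), groups] = 1.0                 # prolongation (membership) matrix
    W = A / A.sum(axis=0, keepdims=True)          # restriction: group averages
    P_c, r_c = W.T @ P @ A, W.T @ r               # coarse transition model and rewards
    V_c = np.linalg.solve(np.eye(n_groups) - gamma * P_c, r_c)
    return A @ V_c                                # prolongate back to the fine states
```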
Archive | 2013
Alexander Paprotny; Michael Thess
This chapter is mainly devoted to the question of estimating transition probabilities while taking into account the effect of recommendations. This turns out to be an extremely complex problem. The central result is a simple empirical assumption that allows the complexity of the estimation to be reduced to a level that is computationally tractable for most practical problems. The discussion of this approach gives a deeper insight into essential principles of realtime recommendation engines. Based on this assumption, we propose methods to estimate the transition probabilities and provide some first experimental results. Although the results look promising, more advanced techniques are highly desirable. Such techniques, namely hierarchical and factorization methods, are presented in the following chapters.
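For orientation only, a naive count-based estimate might separate observed transitions by whether the follow-up item had been recommended, as sketched below. The triple format (current item, next item, recommended flag) is an assumption of the example, and the sketch does not encode the chapter's empirical assumption.

```python
from collections import defaultdict

def estimate_transitions(sessions):
    """Count-based estimates of P(next | current, recommended) and
    P(next | current, not recommended). Each session step is a triple
    (current_item, next_item, was_recommended); purely illustrative."""
    counts = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # [with rec, without rec]
    for session in sessions:
        for cur, nxt, rec in session:
            counts[cur][nxt][0 if rec else 1] += 1
    probs = {}
    for cur, nxts in counts.items():
        total_rec = sum(c[0] for c in nxts.values())
        total_no = sum(c[1] for c in nxts.values())
        probs[cur] = {nxt: (c[0] / total_rec if total_rec else 0.0,
                            c[1] / total_no if total_no else 0.0)
                      for nxt, c in nxts.items()}
    return probs
```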
Archive | 2013
Alexander Paprotny; Michael Thess
The robust measurement of the efficiency of recommendation algorithms is an extremely important factor in the development of recommendation engines. We provide some useful methodological remarks on this topic in this chapter, even though it is not directly connected to the problem of adaptive learning. We further propose a straightforward algorithm to calculate confidence intervals for REs. Finally, we discuss Simpson's paradox, which illustrates the importance of constant environmental conditions for testing.
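As one concrete, standard way to attach a confidence interval to a measured acceptance rate, the sketch below computes the Wilson score interval for a binomial proportion; the chapter's own algorithm may differ, so this is only an illustration of the kind of statement such an interval makes.

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """Wilson score interval for a binomial proportion, e.g. the acceptance rate
    of recommendations in a test group (z = 1.96 for roughly 95% confidence)."""
    if trials == 0:
        return (0.0, 0.0)
    p = successes / trials
    denom = 1 + z * z / trials
    centre = (p + z * z / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials + z * z / (4 * trials ** 2)) / denom
    return centre - half, centre + half

wilson_interval(130, 1000)  # acceptance rate of ~13% with a margin of about 2%
```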