

Publications


Featured research published by Christos Dimitrakakis.


Ad Hoc Networks | 2013

Intrusion detection in MANET using classification algorithms: The effects of cost and model selection

Aikaterini Mitrokotsa; Christos Dimitrakakis

Intrusion detection is frequently used as a second line of defense in Mobile Ad-hoc Networks (MANETs). In this paper we examine how to properly use classification methods in intrusion detection for MANETs. In order to do so we evaluate five supervised classification algorithms for intrusion detection on a number of metrics. We measure their performance on a dataset, described in this paper, which includes varied traffic conditions and mobility patterns for multiple attacks. One of our goals is to investigate how classification performance depends on the problem cost matrix. Consequently, we examine how the use of uniform versus weighted cost matrices affects classifier performance. A second goal is to examine techniques for tuning classifiers when unknown attack subtypes are expected during testing. Frequently, when classifiers are tuned using cross-validation, data from the same types of attacks are available in all folds. This differs from real-world deployment, where unknown types of attacks may be present. Consequently, we develop a sequential cross-validation procedure so that not all types of attacks will necessarily be present across all folds, in the hope that this would make the tuning of classifiers more robust. Our results indicate that weighted cost matrices can be used effectively with most statistical classifiers and that sequential cross-validation can have a small but significant effect for certain types of classifiers.
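
A rough sketch of the idea behind the cross-validation procedure described above (the array names and the train/error helpers are placeholders, and this is not the paper's exact protocol): each fold withholds entire attack subtypes from the tuning data, so the classifier is tuned as if previously unseen attack types appear at test time.

```python
# Illustrative sketch only: tune a classifier while withholding whole attack
# subtypes per fold, approximating deployment against unseen attack types.
import numpy as np

def subtype_holdout_folds(attack_subtype, n_folds=5, seed=0):
    """Yield (train_idx, val_idx) pairs where each validation fold contains
    attack subtypes that never appear in its training split."""
    rng = np.random.default_rng(seed)
    subtypes = rng.permutation(np.unique(attack_subtype))
    for chunk in np.array_split(subtypes, n_folds):
        val = np.isin(attack_subtype, chunk)
        yield np.where(~val)[0], np.where(val)[0]

# Example of how such folds could drive model selection (placeholders):
# for train_idx, val_idx in subtype_holdout_folds(subtype_labels):
#     model = train(X[train_idx], y[train_idx], cost_matrix)
#     scores.append(error(model, X[val_idx], y[val_idx], cost_matrix))
```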


PLOS Computational Biology | 2013

Network Self-Organization Explains the Statistics and Dynamics of Synaptic Connection Strengths in Cortex

Pengsheng Zheng; Christos Dimitrakakis; Jochen Triesch

The information processing abilities of neural circuits arise from their synaptic connection patterns. Understanding the laws governing these connectivity patterns is essential for understanding brain function. The overall distribution of synaptic strengths of local excitatory connections in cortex and hippocampus is long-tailed, exhibiting a small number of synaptic connections of very large efficacy. At the same time, new synaptic connections are constantly being created and individual synaptic connection strengths show substantial fluctuations across time. It remains unclear through what mechanisms these properties of neural circuits arise and how they contribute to learning and memory. In this study we show that fundamental characteristics of excitatory synaptic connections in cortex and hippocampus can be explained as a consequence of self-organization in a recurrent network combining spike-timing-dependent plasticity (STDP), structural plasticity and different forms of homeostatic plasticity. In the network, associative synaptic plasticity in the form of STDP induces a rich-get-richer dynamics among synapses, while homeostatic mechanisms induce competition. Under distinctly different initial conditions, the ensuing self-organization produces long-tailed synaptic strength distributions matching experimental findings. We show that this self-organization can take place with a purely additive STDP mechanism and that multiplicative weight dynamics emerge as a consequence of network interactions. The observed patterns of fluctuation of synaptic strengths, including elimination and generation of synaptic connections and long-term persistence of strong connections, are consistent with the dynamics of dendritic spines found in rat hippocampus. Beyond this, the model predicts an approximately power-law scaling of the lifetimes of newly established synaptic connection strengths during development. Our results suggest that the combined action of multiple forms of neuronal plasticity plays an essential role in the formation and maintenance of cortical circuits.
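
A toy caricature (not the paper's spiking network model) of the interplay the abstract describes: additive, activity-driven potentiation acts as a rich-get-richer process, structural plasticity re-seeds weak synapses, and homeostatic normalisation of the total incoming weight forces synapses to compete, which skews the weight distribution.

```python
# Toy caricature of rich-get-richer growth plus homeostatic competition.
# It is NOT the paper's recurrent spiking model, only an illustration.
import numpy as np

rng = np.random.default_rng(1)
n_syn, total = 200, 1.0          # synapses onto one neuron, target summed weight
w = np.full(n_syn, total / n_syn)

for _ in range(20_000):
    # "STDP-like" additive potentiation: stronger synapses drive the neuron
    # more often, so they are more likely to be potentiated (rich get richer).
    i = rng.choice(n_syn, p=w / w.sum())
    w[i] += 0.001
    # Structural plasticity: occasionally re-seed a very weak synapse.
    if rng.random() < 0.01:
        w[rng.integers(n_syn)] = 0.001
    # Homeostatic synaptic scaling: keep the summed input weight constant,
    # which makes synapses compete for a fixed resource.
    w *= total / w.sum()

print("mean / median / max:", w.mean(), np.median(w), w.max())  # skewed tail
```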


Machine Learning | 2008

Rollout sampling approximate policy iteration

Christos Dimitrakakis; Michail G. Lagoudakis

Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a supervised learning problem. This paper proposes variants of an improved policy iteration scheme which addresses the core sampling problem in evaluating a policy through simulation as a multi-armed bandit machine. The resulting algorithm offers performance comparable to that of the previous algorithm, but with significantly less computational effort. An order of magnitude improvement is demonstrated experimentally in two standard reinforcement learning domains: inverted pendulum and mountain-car.
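
A minimal sketch of the core idea, under assumed helpers (rollout_return and the candidate state list are placeholders, and this is only one way to realise the allocation, not the paper's exact algorithm): treat the choice of where to spend the next rollout as a bandit, sample the most ambiguous state, pick its action optimistically, and finally label each state with its empirically best action for classifier training.

```python
# Sketch: allocate policy rollouts across candidate states like a bandit,
# then label each state with its empirically best action so a classifier
# can be trained on the (state, action) pairs.  `rollout_return(state, a)`
# is an assumed simulator helper returning one sampled return.
import numpy as np

def label_states(states, n_actions, rollout_return, budget=2000, c=1.0):
    n = len(states)
    counts = np.ones((n, n_actions))
    sums = np.array([[rollout_return(s, a) for a in range(n_actions)]
                     for s in states], dtype=float)
    for _ in range(budget):
        means = sums / counts
        top2 = np.sort(means, axis=1)[:, -2:]
        s = int(np.argmin(top2[:, 1] - top2[:, 0]))   # most ambiguous state
        bonus = c * np.sqrt(np.log(counts[s].sum()) / counts[s])
        a = int(np.argmax(means[s] + bonus))          # UCB1 within the state
        sums[s, a] += rollout_return(states[s], a)
        counts[s, a] += 1
    return list(zip(states, (sums / counts).argmax(axis=1)))
```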


Conference on Recommender Systems | 2013

Personalized news recommendation with context trees

Florent Garcin; Christos Dimitrakakis; Boi Faltings

The proliferation of online news creates a need for filtering interesting articles. Compared to other products, however, recommending news has specific challenges: news preferences are subject to trends, users do not want to see multiple articles with similar content, and frequently we have insufficient information to profile the reader. In this paper, we introduce a class of news recommendation systems based on context trees. They can provide high-quality news recommendations to anonymous visitors based on present browsing behaviour. Using an unbiased testing methodology, we show that they make accurate and novel recommendations, and that they are sufficiently flexible for the challenges of news recommendation.
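
A small sketch of the underlying idea as a plain variable-order (context-tree-like) model over a visitor's browsing sequence: count which article follows each recent context and recommend from the longest context seen before. The systems in the paper combine several such experts; this only illustrates the data structure, and all names are illustrative.

```python
# Minimal variable-order context model: store next-article counts for every
# suffix of a browsing sequence up to `max_order`, recommend from the longest
# matching context.  Illustrative only, not the paper's full mixture of experts.
from collections import Counter, defaultdict

class ContextTreeRecommender:
    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(Counter)   # context tuple -> next-item counts

    def fit(self, sessions):
        for seq in sessions:
            for i in range(1, len(seq)):
                for k in range(self.max_order + 1):
                    if k <= i:
                        self.counts[tuple(seq[i - k:i])][seq[i]] += 1

    def recommend(self, history, top_n=3):
        for k in range(min(self.max_order, len(history)), -1, -1):
            ctx = tuple(history[len(history) - k:])
            if ctx in self.counts:
                return [item for item, _ in self.counts[ctx].most_common(top_n)]
        return []

rec = ContextTreeRecommender()
rec.fit([["a", "b", "c"], ["a", "b", "d"], ["b", "c", "d"]])
print(rec.recommend(["a", "b"]))   # e.g. ['c', 'd']
```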


European Conference on Machine Learning | 2011

Preference elicitation and inverse reinforcement learning

Constantin A. Rothkopf; Christos Dimitrakakis

We state the problem of inverse reinforcement learning in terms of preference elicitation, resulting in a principled (Bayesian) statistical formulation. This generalises previous work on Bayesian inverse reinforcement learning and allows us to obtain a posterior distribution on the agent's preferences, policy and, optionally, the obtained reward sequence, from observations. We examine the relation of the resulting approach to other statistical methods for inverse reinforcement learning via analysis and experimental results. We show that preferences can be determined accurately, even if the observed agent's policy is sub-optimal with respect to its own preferences. In that case, significantly improved policies with respect to the agent's preferences are obtained, compared to both other methods and to the performance of the demonstrated policy.
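
A compact sketch of the kind of Bayesian treatment the abstract describes, on an assumed small finite MDP with a known transition tensor: place a Gaussian prior on the state reward vector, model the demonstrator as softmax-optimal, and sample the reward posterior with Metropolis-Hastings. All names, the prior and the MDP itself are illustrative, not the paper's exact model.

```python
# Sketch: posterior over state rewards given demonstrations, assuming a known
# transition tensor P[s, a, s'], a softmax-optimal demonstrator, and a
# Gaussian prior on rewards.
import numpy as np

def q_values(r, P, gamma=0.95, iters=200):
    """Q-values for reward vector r by value iteration on the known MDP."""
    S, A, _ = P.shape
    q = np.zeros((S, A))
    for _ in range(iters):
        v = q.max(axis=1)
        q = r[:, None] + gamma * P @ v
    return q

def log_post(r, demos, P, beta=5.0, sigma=1.0):
    q = beta * q_values(r, P)                              # softmax demonstrator
    logpi = q - np.logaddexp.reduce(q, axis=1, keepdims=True)
    loglik = sum(logpi[s, a] for s, a in demos)            # demos: (state, action)
    return loglik - 0.5 * np.sum(r ** 2) / sigma ** 2      # Gaussian prior

def sample_rewards(demos, P, n_samples=2000, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    r = np.zeros(P.shape[0])
    lp = log_post(r, demos, P)
    samples = []
    for _ in range(n_samples):
        prop = r + step * rng.standard_normal(r.shape)     # random-walk proposal
        lp_prop = log_post(prop, demos, P)
        if np.log(rng.random()) < lp_prop - lp:            # Metropolis accept
            r, lp = prop, lp_prop
        samples.append(r.copy())
    return np.array(samples)
```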


European Workshop on Reinforcement Learning | 2011

Bayesian multitask inverse reinforcement learning

Christos Dimitrakakis; Constantin A. Rothkopf

We generalise the problem of inverse reinforcement learning to multiple tasks, from multiple demonstrations. Each demonstration may represent one expert trying to solve a different task, or different experts trying to solve the same task. Our main contribution is to formalise the problem as statistical preference elicitation, via a number of structured priors, whose form captures our biases about the relatedness of different tasks or expert policies. In doing so, we introduce a prior on policy optimality, which is more natural to specify. We show that our framework allows us not only to learn efficiently from multiple experts but also to effectively differentiate between the goals of each. Possible applications include analysing the intrinsic motivations of subjects in behavioural experiments and learning from multiple teachers.
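
One simple way to picture a structured prior of the kind mentioned above (purely illustrative; the hierarchy, parameters and names are assumptions, not the paper's construction): each expert's reward vector is drawn around a shared population mean, so inference couples the demonstrators while still letting their goals differ.

```python
# Illustrative hierarchical prior over per-expert rewards: every expert's
# reward vector is a noisy copy of a shared mean, which is what lets data
# from several demonstrators inform each other while their goals can differ.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_experts = 10, 4
tau, sigma = 1.0, 0.3                      # spread of the mean / of each expert

mu = tau * rng.standard_normal(n_states)                    # shared mean reward
rewards = mu + sigma * rng.standard_normal((n_experts, n_states))

# Each row of `rewards` would then drive one expert's (softmax-)optimal policy,
# and inference would recover both `mu` and the per-expert deviations.
print(rewards.shape)   # (4, 10)
```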


International Conference on Acoustics, Speech, and Signal Processing | 2004

Boosting HMMs with an application to speech recognition

Christos Dimitrakakis; Samy Bengio

Boosting is a general method for training an ensemble of classifiers with a view to improving performance relative to that of a single classifier. While the original AdaBoost algorithm has been defined for classification tasks, the current work examines its applicability to sequence learning problems, focusing on speech recognition. We apply boosting at the phoneme model level and recombine expert decisions using multi-stream techniques.
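
A generic sketch of the boosting loop referred to above, at the level of weighted training examples. The train_model and predict helpers stand in for training and decoding one phoneme-level expert; the multi-stream recombination is not shown, and the loop is a standard discrete AdaBoost update rather than the paper's exact recipe.

```python
# Generic AdaBoost-style loop: reweight training examples after each round so
# the next expert focuses on what the previous ones got wrong.
# `train_model(X, y, w)` and `predict(model, X)` are assumed placeholders.
import numpy as np

def boost(X, y, train_model, predict, rounds=10):
    n = len(y)
    w = np.full(n, 1.0 / n)
    experts, alphas = [], []
    for _ in range(rounds):
        model = train_model(X, y, w)
        wrong = (predict(model, X) != y)
        err = float(np.dot(w, wrong))
        if err <= 0 or err >= 0.5:          # stop if perfect or too weak
            break
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(alpha * np.where(wrong, 1.0, -1.0))
        w /= w.sum()                        # keep weights a distribution
        experts.append(model)
        alphas.append(alpha)
    return experts, alphas                  # combined by weighted voting
```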


Algorithmic Learning Theory | 2014

Robust and Private Bayesian Inference

Christos Dimitrakakis; Blaine Nelson; Aikaterini Mitrokotsa; Benjamin I. P. Rubinstein

We examine the robustness and privacy of Bayesian inference, under assumptions on the prior, and with no modifications to the Bayesian framework. First, we generalise the concept of differential privacy to arbitrary dataset distances, outcome spaces and distribution families. We then prove bounds on the robustness of the posterior, introduce a posterior sampling mechanism, show that it is differentially private and provide finite sample bounds for distinguishability-based privacy under a strong adversarial model. Finally, we give examples satisfying our assumptions.
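
A minimal numerical illustration of a posterior sampling mechanism on a conjugate Beta-Bernoulli model (the model and parameter choices are illustrative, not taken from the paper): instead of publishing the exact posterior, the curator publishes a single parameter drawn from it, and the smoothness induced by the prior is what controls how distinguishable neighbouring datasets are.

```python
# Posterior sampling mechanism, illustrated on a Beta-Bernoulli model:
# release one draw from the posterior rather than the posterior itself.
import numpy as np

def posterior_sample_release(data, a=2.0, b=2.0, seed=None):
    """data: array of 0/1 records; returns a single draw of theta from
    Beta(a + #ones, b + #zeros), which is what the curator publishes."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    return rng.beta(a + data.sum(), b + (len(data) - data.sum()))

x = np.array([1, 1, 0, 1, 0, 1, 1, 0])
print(posterior_sample_release(x, seed=0))
# Changing one record shifts the Beta posterior only slightly, which is what
# bounds how distinguishable the released draws are for neighbouring datasets.
```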


Neurocomputing | 2005

Online adaptive policies for ensemble classifiers

Christos Dimitrakakis; Samy Bengio

Ensemble algorithms can improve the performance of a given learning algorithm through the combination of multiple base classifiers into an ensemble. In this paper, we attempt to train and combine the base classifiers using an adaptive policy. This policy is learnt through a Q-learning inspired technique. Its effectiveness for an essentially supervised task is demonstrated by experimental results on several UCI benchmark databases.
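
A rough sketch of the kind of adaptive policy described above (the interface of the base classifiers and the reward definition are illustrative assumptions, not the paper's construction): a simple value-learning, epsilon-greedy policy decides which base classifier receives each incoming training example.

```python
# Sketch of an adaptive training policy: an epsilon-greedy value learner
# decides which base classifier gets each incoming example.  `bases` is a
# list of objects with assumed partial_fit/predict-style hooks, and the
# reward (correct prediction by the chosen base) is purely illustrative.
import numpy as np

def train_with_policy(stream, bases, epochs=1, eps=0.1, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros(len(bases))                   # one action per base classifier
    for _ in range(epochs):
        for x, y in stream:
            k = rng.integers(len(bases)) if rng.random() < eps else int(q.argmax())
            reward = float(bases[k].predict([x])[0] == y)
            bases[k].partial_fit([x], [y])     # give the example to that base
            q[k] += lr * (reward - q[k])       # incremental value update
    return q
```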


ACM Transactions on Intelligent Systems and Technology | 2017

DUCT: An Upper Confidence Bound Approach to Distributed Constraint Optimization Problems

Brammert Ottens; Christos Dimitrakakis; Boi Faltings

We propose a distributed upper confidence bound approach, DUCT, for solving distributed constraint optimization problems. We compare four variants of this approach with a baseline random sampling algorithm, as well as other complete and incomplete algorithms for DCOPs. Under general assumptions, we theoretically show that the solution found by DUCT after T steps is approximately T^{-1}-close to the optimal. Experimentally, we show that DUCT matches the optimal solution found by the well-known DPOP and O-DPOP algorithms on moderate-size problems, while always requiring less agent communication. For larger problems, where DPOP fails, we show that DUCT produces significantly better solutions than local, incomplete algorithms. Overall, we believe that DUCT is a practical, scalable algorithm for complex DCOPs.
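
A small sketch of the confidence-bound rule at the heart of such an approach (the sample_utility callback is a placeholder; DUCT itself runs distributed over the agents' variable ordering, which is not shown): each candidate value of a variable keeps a running mean and a visit count, and the next value to sample is chosen optimistically.

```python
# Sketch of the upper-confidence-bound choice used by a DUCT-style sampler:
# each candidate value keeps a running mean and a visit count, and the next
# value to explore is the one with the best optimistic bound.
# `sample_utility(v)` is an assumed callback returning the utility of one
# completed assignment that fixes this variable to value v.
import numpy as np

def ucb_pick(means, counts, c=1.414):
    t = counts.sum()
    return int(np.argmax(means + c * np.sqrt(np.log(t) / counts)))

def sample_variable(n_values, sample_utility, budget=500):
    counts = np.ones(n_values)
    means = np.array([sample_utility(v) for v in range(n_values)], dtype=float)
    for _ in range(budget):
        v = ucb_pick(means, counts)
        u = sample_utility(v)
        counts[v] += 1
        means[v] += (u - means[v]) / counts[v]   # incremental mean update
    return int(means.argmax()), means            # best value found so far
```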

Collaboration


Dive into Christos Dimitrakakis's collaborations.

Top Co-Authors

Aikaterini Mitrokotsa (Chalmers University of Technology)
Aristide C. Y. Tossou (Chalmers University of Technology)
Samy Bengio (Idiap Research Institute)
Jochen Triesch (Goethe University Frankfurt)
Boi Faltings (École Polytechnique Fédérale de Lausanne)
Goran Radanovic (École Polytechnique Fédérale de Lausanne)