Publication


Featured research published by Haitham Bou Ammar.


Pattern Recognition Letters | 2015

Factored four way conditional restricted Boltzmann machines for activity recognition

Decebal Constantin Mocanu; Haitham Bou Ammar; Dietwig Jos Clement Lowet; Kurt Driessens; Antonio Liotta; Gerhard Weiss; Karl Tuyls

This paper introduces a new learning algorithm for human activity recognition capable of simultaneous regression and classification. Building upon conditional restricted Boltzmann machines (CRBMs), factored four-way conditional restricted Boltzmann machines (FFW-CRBMs) incorporate a new label layer and four-way interactions among the neurons from the different layers. The additional layer gives the classification nodes a multiplicative effect similar in strength to that of the other layers, and prevents the classification neurons from being overwhelmed by the much larger set of other neurons. This makes FFW-CRBMs capable of performing activity recognition, prediction, and self-evaluation of classification within one unified framework. As a second contribution, sequential Markov chain contrastive divergence (SMcCD) is introduced. SMcCD modifies contrastive divergence to compensate for the extra complexity of FFW-CRBMs during training. Two sets of experiments, one on benchmark datasets and one on a robotic platform for smart companions, show the effectiveness of FFW-CRBMs.
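For context on the training scheme SMcCD builds upon, here is a minimal CD-1 update for a plain binary RBM in NumPy. This is a sketch of standard contrastive divergence only, not the factored four-way model or SMcCD itself; the function name, unit counts, and learning rate are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, v0, rng, lr=0.1):
    """One CD-1 update for a binary RBM with weight matrix W (visible x hidden).

    Biases are omitted for brevity. SMcCD, as described in the paper,
    modifies this basic scheme for the factored four-way model."""
    # Positive phase: sample hidden units given the data.
    h0_prob = sigmoid(v0 @ W)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # Negative phase: one step of Gibbs sampling back to the visibles.
    v1_prob = sigmoid(h0 @ W.T)
    h1_prob = sigmoid(v1_prob @ W)
    # Gradient approximation: data correlations minus model correlations.
    grad = np.outer(v0, h0_prob) - np.outer(v1_prob, h1_prob)
    return W + lr * grad

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(6, 4))   # 6 visible, 4 hidden units
v = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 1.0])
for _ in range(100):
    W = cd1_step(W, v, rng)
```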


Integrated Network Management | 2015

Reduced reference image quality assessment via Boltzmann Machines

Decebal Constantin Mocanu; Georgios Exarchakos; Haitham Bou Ammar; Antonio Liotta

Monitoring and controlling users' perceived quality in modern video services is a challenging proposition, mainly due to the limitations of current Image Quality Assessment (IQA) algorithms. Subjective Quality of Experience (QoE) studies give an accurate picture, but unfortunately cannot be used in real-world scenarios. In general, objective QoE algorithms represent a good substitute for subjective ones, and they split into three main directions: Full Reference (FR), Reduced Reference (RR), and No Reference (NR). Of these three, the RR IQA approach offers a practical solution for assessing the quality of an impaired image, since only a small amount of information about the original image is needed. With the need for automated, context-independent QoE algorithms in mind, this paper introduces a novel stochastic RR IQA metric based on deep learning, namely the Restricted Boltzmann Machine Similarity Measure (RBMSim). RBMSim was evaluated on two benchmark image databases with subjective studies, against objective IQA algorithms. The results show that its performance is comparable with, or in some cases even better than, widely known FR IQA methods.
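To illustrate the reduced-reference idea behind an RBM-based quality measure, the sketch below compares the free energies an RBM assigns to a reference patch and an impaired patch: only one scalar per patch needs to travel from the reference side. This is a hypothetical illustration, not the paper's actual RBMSim metric; the function names and the random, untrained weights are assumptions.

```python
import numpy as np

def free_energy(v, W, b_vis, b_hid):
    """Free energy of a binary RBM; lower means the input is more 'typical'
    of what the model has learned. Illustrative, not the RBMSim definition."""
    return -v @ b_vis - np.sum(np.logaddexp(0.0, v @ W + b_hid))

def rbm_similarity(ref_patch, impaired_patch, W, b_vis, b_hid):
    # Reduced-reference idea: only the reference patch's free energy (a
    # scalar) needs to be transmitted, not the original image itself.
    f_ref = free_energy(ref_patch, W, b_vis, b_hid)
    f_imp = free_energy(impaired_patch, W, b_vis, b_hid)
    return abs(f_ref - f_imp)

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(16, 8))    # toy 16-pixel patches, 8 hidden units
b_vis, b_hid = np.zeros(16), np.zeros(8)
ref = rng.random(16)
score_same = rbm_similarity(ref, ref, W, b_vis, b_hid)
score_diff = rbm_similarity(ref, rng.random(16), W, b_vis, b_hid)
```

An identical patch yields a score of zero; larger scores indicate a larger distortion as seen by the model.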


European Conference on Machine Learning | 2013

Automatically Mapped Transfer between Reinforcement Learning Tasks via Three-Way Restricted Boltzmann Machines

Haitham Bou Ammar; Decebal Constantin Mocanu; Matthew E. Taylor; Kurt Driessens; Karl Tuyls; Gerhard Weiss

Existing reinforcement learning approaches are often hampered by learning tabula rasa. Transfer for reinforcement learning tackles this problem by enabling the reuse of previously learned results, but may require an inter-task mapping to encode how the previously learned task and the new task are related. This paper presents an autonomous framework for learning inter-task mappings based on an adaptation of restricted Boltzmann machines. Both a full model and a computationally efficient factored model are introduced and shown to be effective in multiple transfer learning scenarios.


Adaptive and Learning Agents | 2011

Reinforcement learning transfer via common subspaces

Haitham Bou Ammar; Matthew E. Taylor

Agents in reinforcement learning may learn slowly in large or complex tasks; transfer learning is one technique to speed up learning by providing an informative prior. How to best enable transfer between tasks with different state representations and/or actions is currently an open question. This paper introduces the concept of a common task subspace, which is used to autonomously learn how two tasks are related. Experiments in two different nonlinear domains empirically show that a learned inter-state mapping can successfully be used by fitted value iteration to (1) improve the performance of a policy learned with a fixed number of samples, and (2) reduce the time required to converge to a (near-)optimal policy with unlimited samples.
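For context, fitted value iteration, the learner used in the paper's experiments, alternates Bellman backups at sampled states with a supervised fit of the value function. A minimal sketch on a toy 1-D chain with a quadratic approximator; the domain, features, and parameters are illustrative, not the paper's experimental setup.

```python
import numpy as np

# Illustrative fitted value iteration on a toy 1-D chain with a linear
# value-function approximator over polynomial features.
states = np.linspace(0.0, 1.0, 21)
actions = [-0.05, 0.05]                    # move left or right
gamma = 0.9

def step(s, a):
    s2 = np.clip(s + a, 0.0, 1.0)
    reward = 1.0 if s2 >= 1.0 else 0.0     # goal at the right end
    return s2, reward

def features(s):
    return np.array([1.0, s, s * s])       # simple quadratic features

theta = np.zeros(3)
for _ in range(50):
    # Bellman backup targets at the sampled states...
    targets = [max(r + gamma * features(s2) @ theta
                   for s2, r in (step(s, a) for a in actions))
               for s in states]
    # ...then fit the approximator to those targets by least squares.
    Phi = np.array([features(s) for s in states])
    theta, *_ = np.linalg.lstsq(Phi, np.array(targets), rcond=None)

v_left, v_right = features(0.0) @ theta, features(1.0) @ theta
```

After the backups, the fitted values increase toward the goal end of the chain.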


Artificial Life | 2014

Effects of Evolution on the Emergence of Scale Free Networks

Bijan Ranjbar-Sahraei; Daan Bloembergen; Haitham Bou Ammar; Karl Tuyls; Gerhard Weiss

The evolution of cooperation in social networks, and the emergence of these networks using simple rules of attachment, have both been studied extensively, although mostly in separation. In real-world scenarios, however, these two fields are typically intertwined: individuals' behavior affects the structural emergence of the network and vice versa. Although much progress has been made in understanding each of the aforementioned fields, many of their joint characteristics remain unexplored. In this paper we propose the Simultaneous Emergence and Evolution (SEE) model, aiming to unify the study of these two fields. The SEE model combines the continuous-action prisoner's dilemma (modeling the evolution of cooperation) with preferential attachment (used to model network emergence), enabling the simultaneous study of both structural emergence and behavioral evolution of social networks. A set of empirical experiments shows that the SEE model is capable of generating realistic complex networks, while at the same time allowing for the study of the impact of initial conditions on the evolution of cooperation.
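Preferential attachment, the network-emergence half of the SEE model, can be sketched in a few lines: each new node links to existing nodes with probability proportional to their degree, which is what produces hubs and heavy-tailed degree distributions. The seed network and parameters below are illustrative.

```python
import random

def preferential_attachment(n_nodes, m=2, seed=0):
    """Grow a network where each new node links to m existing nodes chosen
    with probability proportional to degree (the attachment rule the SEE
    model combines with a continuous-action prisoner's dilemma)."""
    rng = random.Random(seed)
    # Start from a small fully connected seed network.
    edges = [(0, 1), (1, 2), (0, 2)]
    # 'targets' holds one entry per edge endpoint, so sampling uniformly
    # from it samples nodes proportionally to their degree.
    targets = [u for e in edges for u in e]
    for new in range(3, n_nodes):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for u in chosen:
            edges.append((new, u))
            targets.extend([new, u])
    return edges

edges = preferential_attachment(200)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
```

Early nodes accumulate far more links than late arrivals, the hallmark of scale-free growth.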


Multiagent System Technologies | 2012

Evolutionary dynamics of ant colony optimization

Haitham Bou Ammar; Karl Tuyls; Michael Kaisers

Swarm intelligence has been successfully applied in various domains, e.g., path planning, resource allocation and data mining. Despite its wide use, a theoretical framework in which the behavior of swarm intelligence can be formally understood is still lacking. This article starts by formally deriving the evolutionary dynamics of ant colony optimization, an important swarm intelligence algorithm. We then continue to formally link these to reinforcement learning. Specifically, we show that the attained evolutionary dynamics are equivalent to the dynamics of Q-learning. Both algorithms are equivalent to a dynamical system known as the replicator dynamics in the domain of evolutionary game theory. In conclusion, the process of improvement described by the replicator dynamics appears to be a fundamental principle which drives processes in swarm intelligence, evolution, and learning.
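The replicator dynamics referenced above can be illustrated numerically: each strategy's population share grows in proportion to how much its fitness exceeds the population average. A minimal Euler-integration sketch on a two-action game in which action 0 strictly dominates; the payoff matrix is illustrative, not derived from ant colony optimization.

```python
# Euler integration of the replicator dynamics for a two-action symmetric
# game. The paper derives these dynamics analytically for ant colony
# optimization and links them to the dynamics of Q-learning.
def replicator_step(x, payoff, dt=0.01):
    # Fitness of each pure strategy against the current population mix.
    fitness = [sum(payoff[i][j] * x[j] for j in range(len(x)))
               for i in range(len(x))]
    avg = sum(x[i] * fitness[i] for i in range(len(x)))
    # Shares grow in proportion to their fitness advantage over the average.
    return [x[i] + dt * x[i] * (fitness[i] - avg) for i in range(len(x))]

# Payoff matrix where action 0 strictly dominates action 1.
payoff = [[2.0, 2.0],
          [1.0, 1.0]]
x = [0.5, 0.5]
for _ in range(2000):
    x = replicator_step(x, payoff)
```

The population concentrates on the dominant action while the shares remain a valid probability distribution throughout.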


European Workshop on Multi-Agent Systems | 2011

Reinforcement learning transfer using a sparse coded inter-task mapping

Haitham Bou Ammar; Matthew E. Taylor; Karl Tuyls; Gerhard Weiss

Reinforcement learning agents can successfully learn in a variety of difficult tasks. A fundamental problem is that they may learn slowly in complex environments, inspiring the development of speedup methods such as transfer learning. Transfer improves learning by reusing learned behaviors in similar tasks, usually via an inter-task mapping, which defines how a pair of tasks are related. This paper proposes a novel transfer learning technique to autonomously construct an inter-task mapping by using a novel combination of sparse coding, sparse projection learning, and sparse pseudo-input Gaussian processes. Experiments show successful transfer of information between two very different domains: the mountain car and the pole swing-up task. This paper empirically shows that the learned inter-task mapping can be used to successfully (1) improve the performance of a policy learned with a fixed number of samples, (2) reduce the time needed by the algorithms to converge to a policy with a fixed number of samples, and (3) converge faster to a near-optimal policy given a large number of samples.


International Conference on Mechatronics | 2013

Swarm-based evaluation of nonparametric SysML mechatronics system design

Mohammad Chami; Haitham Bou Ammar; Holger Voos; Karl Tuyls; Gerhard Weiss

The design of a mechatronic system is considered one of the hardest challenges in industry, mainly due to the multidisciplinary nature of the design process, which requires integrating the knowledge of the participating disciplines. Previously, we proposed SysDICE, a framework capable of (1) modeling the multidisciplinary information of mechatronic systems using SysML and (2) adopting a nonparametric technique for evaluating such a SysML model. In SysDICE, the optimization that determined the best alternative combinations for satisfying the requirements was time-costly and discarded prohibited combinations. This paper contributes by (1) proposing an effective method for restricting the set of possible alternative combinations and (2) employing a swarm intelligence based optimization scheme which significantly reduces the computational cost of SysDICE.
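As a rough sketch of what a swarm-style (pheromone-based) search over design-alternative combinations can look like, the toy below picks one alternative per component, biased by pheromone that reinforces the best combination found so far. The component/score structure, function names, and parameters are entirely hypothetical; this is not the SysDICE implementation.

```python
import random

def swarm_search(options_per_component, score, n_ants=20, n_iters=50,
                 evaporation=0.1, seed=0):
    """Hypothetical pheromone-based search over alternative combinations."""
    rng = random.Random(seed)
    pheromone = [[1.0] * n for n in options_per_component]
    best, best_score = None, float("-inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            # Each ant picks one alternative per component, biased by pheromone.
            combo = [rng.choices(range(len(p)), weights=p)[0] for p in pheromone]
            s = score(combo)
            if s > best_score:
                best, best_score = combo, s
        # Evaporate, then reinforce the best combination found so far.
        for comp, p in enumerate(pheromone):
            for i in range(len(p)):
                p[i] *= (1.0 - evaporation)
            p[best[comp]] += 1.0
    return best, best_score

# Toy requirement score: prefer alternative index 1 for every component.
best, s = swarm_search([3, 3, 3], score=lambda c: -sum((i - 1) ** 2 for i in c))
```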


Pattern Recognition | 2017

Scalable lifelong reinforcement learning

Yusen Zhan; Haitham Bou Ammar; Matthew E. Taylor

Lifelong reinforcement learning provides a successful framework for agents to learn multiple consecutive tasks sequentially. Current methods, however, suffer from scalability issues when the agent has to solve a large number of tasks. In this paper, we remedy the above drawbacks and propose a novel scalable technique for lifelong reinforcement learning. We derive an algorithm which assumes the availability of multiple processing units and computes shared repositories and local policies using only local information exchange. We then show that it attains a linear convergence rate, improving on current lifelong policy search methods. Finally, we evaluate our technique on a set of benchmark dynamical systems and demonstrate learning speed-ups and reduced running times.


Neural Computation | 2017

Nonconvex Policy Search Using Variational Inequalities

Yusen Zhan; Haitham Bou Ammar; Matthew E. Taylor

Policy search is a class of reinforcement learning algorithms for finding optimal policies in control problems with limited feedback. These methods have been shown to be successful in high-dimensional problems such as robotics control. Though successful, current methods can lead to unsafe policy parameters that potentially could damage hardware units. Motivated by such constraints, we propose projection-based methods for safe policies. These methods, however, can handle only convex policy constraints. In this letter, we propose the first safe policy search reinforcement learner capable of operating under nonconvex policy constraints. This is achieved by observing, for the first time, a connection between nonconvex variational inequalities and policy search problems. We provide two algorithms, Mann and two-step iteration, to solve the above problems and prove convergence in the nonconvex stochastic setting. Finally, we demonstrate the performance of the algorithms on six benchmark dynamical systems and show that our new method is capable of outperforming previous methods under a variety of settings.
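Mann iteration, one of the two schemes the paper proposes, replaces a plain fixed-point step x_{k+1} = T(x_k) with the averaged step x_{k+1} = (1 - alpha) x_k + alpha T(x_k). A scalar sketch on an illustrative operator follows; the operator and step size are assumptions, not the paper's policy-search setting.

```python
import math

def mann_iteration(T, x0, alpha=0.5, n_steps=200):
    """Averaged fixed-point iteration: each step blends the current iterate
    with T applied to it, which can converge where the plain iteration
    oscillates or diverges."""
    x = x0
    for _ in range(n_steps):
        x = (1 - alpha) * x + alpha * T(x)
    return x

# T(x) = cos(x) has a unique fixed point near 0.739 (the Dottie number).
x_star = mann_iteration(math.cos, 0.0)
```

The averaging parameter alpha trades off progress per step against stability of the iteration.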

Collaboration


Haitham Bou Ammar's closest collaborators and their affiliations.

Top Co-Authors

Karl Tuyls (University of Liverpool)
Matthew E. Taylor (Washington State University)
Eric Eaton (University of Pennsylvania)
Siqi Chen (Maastricht University)
Rasul Tutunov (University of Pennsylvania)
Decebal Constantin Mocanu (Eindhoven University of Technology)