Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Armon Toubman is active.

Publication


Featured research published by Armon Toubman.


Industrial and Engineering Applications of Artificial Intelligence and Expert Systems | 2014

Dynamic Scripting with Team Coordination in Air Combat Simulation

Armon Toubman; Jan Joris Roessingh; Pieter Spronck; Aske Plaat; H. Jaap van den Herik

Traditionally, the behavior of Computer Generated Forces (CGFs) is controlled through scripts. Building such scripts requires time and expertise, and becomes harder as the domain becomes richer and more life-like. These downsides can be reduced by automatically generating behavior for CGFs using machine learning techniques. This paper focuses on Dynamic Scripting (DS), a technique tailored to generating agent behavior. DS searches for an optimal combination of rules from a rule base. Under the assumption that intra-team coordination leads to more effective learning, we propose an extension of DS with explicit coordination, called DS+C. In a comparison with regular DS, we find that the addition of team coordination results in earlier convergence to optimal behavior. In addition, we achieved a performance increase of 20% against an unpredictable opponent. With DS+C, the generated CGF behavior is more effective, since the CGFs act on knowledge obtained through coordination, and it converges more efficiently than with regular DS.
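For readers unfamiliar with Dynamic Scripting, the following is a minimal sketch of the basic DS loop the paper builds on: a script is drawn from a weighted rule base, and rule weights are adjusted after each encounter. The rule names, weight bounds, maximum adjustment, and fitness signal are illustrative assumptions, and the DS+C coordination mechanism itself is not shown.

import random

# Minimal Dynamic Scripting sketch (illustrative values; the DS+C
# coordination extension from the paper is not modeled here).

class Rule:
    def __init__(self, name, weight=100.0):
        self.name = name        # e.g. "fire_when_locked" (hypothetical rule)
        self.weight = weight    # selection weight, adapted over time

def generate_script(rulebase, script_size):
    """Build a script by weight-proportional sampling without replacement."""
    pool = list(rulebase)
    script = []
    for _ in range(min(script_size, len(pool))):
        total = sum(r.weight for r in pool)
        pick = random.uniform(0, total)
        cumulative = 0.0
        for rule in pool:
            cumulative += rule.weight
            if pick <= cumulative:
                script.append(rule)
                pool.remove(rule)
                break
    return script

def update_weights(rulebase, script, fitness, max_adjust=50.0,
                   w_min=10.0, w_max=400.0):
    """Reward or punish the rules used in the script (fitness in [-1, 1])
    and compensate the unused rules so the total weight stays roughly
    constant, as in regular Dynamic Scripting."""
    adjustment = max_adjust * fitness
    used = set(id(r) for r in script)
    others = [r for r in rulebase if id(r) not in used]
    for rule in script:
        rule.weight = min(w_max, max(w_min, rule.weight + adjustment))
    if others:
        compensation = -adjustment * len(script) / len(others)
        for rule in others:
            rule.weight = min(w_max, max(w_min, rule.weight + compensation))

# One learning episode: draw a script, run the encounter (not shown here),
# then update the weights with the observed fitness.
rulebase = [Rule(f"rule_{i}") for i in range(10)]
script = generate_script(rulebase, script_size=4)
update_weights(rulebase, script, fitness=0.6)   # e.g. the encounter was won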


Systems, Man and Cybernetics | 2015

Rewarding Air Combat Behavior in Training Simulations

Armon Toubman; Jan Joris Roessingh; Pieter Spronck; Aske Plaat; H. Jaap van den Herik

Computer generated forces (CGFs) inhabiting air combat training simulations must show realistic and adaptive behavior to effectively perform their roles as allies and adversaries. In earlier work, behavior for these CGFs was successfully generated using reinforcement learning. However, because missile hits are subject to chance (the probability-of-kill), the CGFs have in certain cases been improperly rewarded and punished. We surmise that taking this probability-of-kill into account in the reward function will improve performance. To remedy the false rewards and punishments, a new reward function is proposed that rewards agents based on the expected outcome of their actions. Tests show that the use of this function significantly increases the performance of the CGFs in various scenarios, compared to the previous reward function and a naïve baseline. Based on the results, the new reward function allows the CGFs to generate more intelligent behavior, which enables better training simulations.
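As a rough illustration of the expected-outcome idea, the sketch below contrasts a reward that depends on the chance result of a missile shot with one based on the shot's kill probability. The function names and reward magnitude are assumptions for illustration, not the paper's exact reward function.

def stochastic_reward(hit, kill_reward=1.0):
    """Old-style reward: depends on whether the missile happened to hit."""
    return kill_reward if hit else 0.0

def expected_outcome_reward(p_kill, kill_reward=1.0):
    """Reward based on the expected outcome of the shot: the agent earns
    the kill probability of the missile it fired, regardless of the
    chance outcome."""
    return kill_reward * p_kill

# A well-placed shot with an 80% kill probability always earns 0.8, so the
# agent is neither punished for an unlucky miss nor rewarded for a lucky hit.
print(expected_outcome_reward(p_kill=0.8))   # 0.8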


Systems, Man and Cybernetics | 2016

Modeling behavior of Computer Generated Forces with Machine Learning Techniques, the NATO Task Group approach

Armon Toubman; Jan Joris Roessingh; Joost van Oijen; Rikke Amilde Løvlid; Ming Hou; Christophe Meyer; Linus J. Luotsinen; Roel Rijken; J. R. Harris; Michal Turcanik

Commercial/Military-Off-The-Shelf (COTS/MOTS) Computer Generated Forces (CGF) packages are widely used in modeling and simulation for training purposes. Conventional CGF packages often include artificial intelligence (AI) interfaces, but lack behavior generation and other adaptive capabilities. We believe Machine Learning (ML) techniques can be beneficial to the behavior modeling process, yet such techniques seem to be underused and perhaps under-appreciated. This paper aims to bridge the gap, at a high level, between users of ML and AI in academia and in the military and industry. We address specific requirements and desired capabilities for applying machine learning to CGF behavior modeling applications. The paper is based on the work of the NATO Research Task Group IST-121 RTG-060, Machine Learning Techniques for Autonomous Computer Generated Entities.


International Conference on Machine Learning and Applications | 2015

Transfer Learning of Air Combat Behavior

Armon Toubman; Jan Joris Roessingh; Pieter Spronck; Aske Plaat; H. Jaap van den Herik

Machine learning techniques can help to automatically generate behavior for computer generated forces inhabiting air combat training simulations. However, as the complexity of scenarios increases, so does the time to learn optimal behavior. Transfer learning has the potential to significantly shorten the learning time between domains that are sufficiently similar. In this paper, we transfer air combat agents with experience fighting in 2-versus-1 scenarios to various 2-versus-2 scenarios. The performance of the transferred agents is compared to that of agents that learn from scratch in the 2v2 scenarios. The experiments show that the experience gained in the 2v1 scenarios is very beneficial in the plain 2v2 scenarios, where further learning is minimal. In difficult 2v2 scenarios transfer also occurs, and further learning ensues. The results pave the way for fast generation of behavior rules for air combat agents for new, complex scenarios using existing behavior models.
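Assuming, as in the Dynamic Scripting work above, that an agent's experience is stored as per-rule weights, a transfer step between scenarios can be as simple as reusing learned weights for the rules the scenarios share. The mapping and rule names below are an illustrative sketch, not the exact procedure from the paper.

DEFAULT_WEIGHT = 100.0   # starting weight for rules without prior experience

def transfer_weights(source_weights, target_rules):
    """Seed the target rule base: shared rules keep their learned weights,
    rules new to the target scenario start from the default weight."""
    return {rule: source_weights.get(rule, DEFAULT_WEIGHT)
            for rule in target_rules}

# Hypothetical example: weights learned in a 2-versus-1 scenario seed a
# 2-versus-2 rule base that adds one new rule.
learned_2v1 = {"fire_when_locked": 310.0, "evade_incoming": 250.0}
rules_2v2 = ["fire_when_locked", "evade_incoming", "split_targets"]
print(transfer_weights(learned_2v1, rules_2v2))
# {'fire_when_locked': 310.0, 'evade_incoming': 250.0, 'split_targets': 100.0}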


Atlantis Ambient and Pervasive Intelligence | 2013

Adaptive Autonomy in Unmanned Ground Vehicles Using Trust Models

Armon Toubman; Peter-Paul van Maanen; Mark Hoogendoorn

Although autonomous systems are becoming more and more capable of performing tasks as well as humans can, there are still many (especially complex) tasks that humans perform much better. When making task allocation decisions, it may therefore turn out that in particular situations it is better to let a human perform the task, whereas in other situations an autonomous system performs better. This could, for instance, depend on the current state of the human, which might be measured by means of ambient devices, but also on experiences obtained in the past. In this chapter, a trust-based approach is developed that aims at judging the current situation and deciding on the best allocation of a given task, either to the human or to the autonomous system. To this end, an experiment was performed in the context of controlling a set of robots to dismantle bombs, with a focus on multiple types of support. The results show that support by means of simply allocating the task to the most suitable party gives superior performance.
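A minimal sketch of trust-based task allocation as described above: trust in each party is updated from past task outcomes, and the next task goes to the most-trusted party. The exponential update rule, learning rate, and initial trust value are illustrative assumptions, not the chapter's exact trust model.

class TrustModel:
    """Toy trust model for allocating a task to a human or an autonomous system."""

    def __init__(self, initial_trust=0.5, rate=0.2):
        self.trust = {"human": initial_trust, "autonomous": initial_trust}
        self.rate = rate   # how quickly trust follows recent performance

    def update(self, party, outcome):
        """Move trust toward the observed outcome (0 = failure, 1 = success)."""
        self.trust[party] += self.rate * (outcome - self.trust[party])

    def allocate(self):
        """Allocate the next task to whichever party is currently trusted more."""
        return max(self.trust, key=self.trust.get)

# Hypothetical run: the autonomous system dismantled the last bomb well,
# while the human operator struggled under high workload.
model = TrustModel()
model.update("autonomous", 0.9)
model.update("human", 0.4)
print(model.allocate())   # 'autonomous'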


28th European Simulation and Modelling Conference - ESM'2014 | 2014

Centralized Versus Decentralized Team Coordination Using Dynamic Scripting

Armon Toubman; Jan Joris Roessingh; Pieter Spronck; Aske Plaat; H.J. van den Herik


European Conference on Artificial Intelligence | 2016

Rapid Adaptation of Air Combat Behaviour

Armon Toubman; Jan Joris Roessingh; Pieter Spronck; Aske Plaat; H.J. van den Herik


Archive | 2014

Improving Air-to-Air Combat Behavior through Transparent Machine Learning

Armon Toubman; Jan Joris Roessingh; Pieter Spronck; Aske Plaat; H.J. van den Herik


Systems, Man and Cybernetics | 2017

Machine learning techniques for autonomous agents in military simulations — Multum in parvo

Jan Joris Roessingh; Armon Toubman; Joost van Oijen; Gerald Poppinga; Rikke Amilde Løvlid; Ming Hou; Linus J. Luotsinen


Lecture Notes in Computer Science | 2015

Evolutionary Dynamic Scripting: Adaptation of Expert Rule Bases for Serious Games

Reinier Kop; Armon Toubman; Mark Hoogendoorn; Jan Joris Roessingh

Collaboration


Dive into Armon Toubman's collaborations.

Top Co-Authors

Jan Joris Roessingh

National Aerospace Laboratory

Rikke Amilde Løvlid

Norwegian Defence Research Establishment

Ming Hou

Defence Research and Development Canada

Linus J. Luotsinen

Swedish Defence Research Agency
