
Publication


Featured research published by Dimitri Ognibene.


Cognitive Neuroscience | 2015

Active inference and epistemic value.

K. J. Friston; Francesco Rigoli; Dimitri Ognibene; Christoph Mathys; Thomas H. B. FitzGerald; Giovanni Pezzulo

We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms.
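
For readers who want the formal anchor, the decomposition referred to above is written in the active inference literature roughly as follows (notation reconstructed from that literature, not quoted from the paper):

```latex
% Negative expected free energy of policy \pi at future time \tau,
% decomposed into epistemic and extrinsic value:
-G(\pi,\tau) =
  \underbrace{\mathbb{E}_{\tilde{Q}}\!\left[
      D_{\mathrm{KL}}\!\left( Q(s_\tau \mid o_\tau, \pi) \,\middle\|\, Q(s_\tau \mid \pi) \right)
  \right]}_{\text{epistemic value (information gain)}}
  +
  \underbrace{\mathbb{E}_{\tilde{Q}}\!\left[ \ln P(o_\tau) \right]}_{\text{extrinsic value (expected utility)}}
```

Here \tilde{Q} = Q(o_\tau, s_\tau \mid \pi) is the predictive distribution over future outcomes and hidden states; policies are then selected by a softmax over -G, whose precision corresponds to the ad hoc softmax parameters mentioned in the abstract.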


IEEE Transactions on Autonomous Mental Development | 2013

The Coordinating Role of Language in Real-Time Multimodal Learning of Cooperative Tasks

Maxime Petit; Stéphane Lallée; Jean-David Boucher; Grégoire Pointeau; Pierrick Cheminade; Dimitri Ognibene; Eris Chinellato; Ugo Pattacini; Ilaria Gori; Uriel Martinez-Hernandez; Hector Barron-Gonzalez; Martin Inderbitzin; Andre L. Luvizotto; Vicky Vouloutsi; Yiannis Demiris; Giorgio Metta; Peter Ford Dominey

One of the defining characteristics of human cognition is our outstanding capacity to cooperate. A central requirement for cooperation is the ability to establish a “shared plan”—which defines the interlaced actions of the two cooperating agents—in real time, and even to negotiate this shared plan during its execution. In the current research we identify the requirements for cooperation, extending our earlier work in this area. These requirements include the ability to negotiate a shared plan using spoken language, to learn new component actions within that plan based on visual observation and kinesthetic demonstration, and finally to coordinate all of these functions in real time. We present a cognitive system that implements these requirements, and demonstrate the system's ability to allow a Nao humanoid robot to learn a nontrivial cooperative task in real time. We further provide a concrete demonstration of how the real-time learning capability can be easily deployed on a different platform, in this case the iCub humanoid. The results are considered in the context of how the development of language in the human infant provides a powerful lever in the development of cooperative plans from lower-level sensorimotor capabilities.
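
To make the “shared plan” notion concrete, here is a minimal sketch of one plausible representation: an ordered, editable sequence of actions, each assigned to an agent. All names and structure are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field

# Illustrative shared-plan representation: an interleaved, editable sequence
# of (agent, action) steps. Names are hypothetical, not the paper's API.
@dataclass
class SharedPlan:
    steps: list = field(default_factory=list)   # [(agent, action), ...]
    cursor: int = 0                              # next step to execute

    def append(self, agent: str, action: str):
        self.steps.append((agent, action))

    def renegotiate(self, index: int, agent: str, action: str):
        # Spoken-language negotiation can rewrite steps not yet executed.
        assert index >= self.cursor, "cannot change an already-executed step"
        self.steps[index] = (agent, action)

    def next_step(self):
        agent, action = self.steps[self.cursor]
        self.cursor += 1
        return agent, action

plan = SharedPlan()
plan.append("human", "hold the box")
plan.append("robot", "place the toy inside")
plan.renegotiate(1, "robot", "slide the toy inside")   # mid-execution change
print(plan.next_step())   # ('human', 'hold the box')
```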


IEEE Transactions on Autonomous Mental Development | 2015

Ecological Active Vision: Four Bioinspired Principles to Integrate Bottom–Up and Adaptive Top–Down Attention Tested With a Simple Camera-Arm Robot

Dimitri Ognibene; Gianluca Baldassarre

Vision gives primates a wealth of information useful to manipulate the environment, but at the same time it can easily overwhelm their computational resources. Active vision is a key solution found by nature to solve this problem: a limited fovea actively displaced in space to collect only relevant information. Here we highlight that in ecological conditions this solution encounters four problems: 1) the agent needs to learn where to look based on its goals; 2) manipulation causes learning feedback in areas of space possibly outside the attention focus; 3) good visual actions are needed to guide manipulation actions, but only the latter can generate learning feedback; and 4) a limited fovea causes aliasing problems. We then propose a computational architecture (“BITPIC”) to overcome the four problems, integrating four bioinspired key ingredients: 1) reinforcement-learning, fovea-based, top-down attention; 2) a strong vision-manipulation coupling; 3) bottom-up, periphery-based attention; and 4) a novel action-oriented memory. The system is tested with a simple simulated camera-arm robot solving a class of search-and-reach tasks involving color-blob “objects.” The results show that the architecture solves the problems, and hence the tasks, very efficiently, and highlight how the architecture principles can contribute to a full exploitation of the advantages of active vision in ecological conditions.
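
As a rough illustration of ingredients 1 and 3, a fixation controller might mix a fixed bottom-up saliency map with a top-down value map trained by reaching feedback. Everything below (map sizes, weights, update rule) is an assumption for the sketch, not the BITPIC code:

```python
import numpy as np

# Sketch: fixation selection that mixes a hard-wired bottom-up saliency map
# with a top-down value map learned from reaching outcomes (tabular update).
H, W = 8, 8
saliency = np.random.rand(H, W)   # stand-in for the bottom-up map
value = np.zeros((H, W))          # top-down map, learned from action outcomes
ALPHA, BETA = 0.1, 0.5            # learning rate, bottom-up weight

def pick_fixation():
    combined = BETA * saliency + (1 - BETA) * value
    return np.unravel_index(np.argmax(combined), combined.shape)

def update(fix, reward):
    # Reaching feedback trains the top-down component at the fixated location.
    value[fix] += ALPHA * (reward - value[fix])

fix = pick_fixation()
update(fix, reward=1.0)           # e.g., the reach touched the target blob
```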


Simulation of Adaptive Behavior | 2008

Integrating Epistemic Action (Active Vision) and Pragmatic Action (Reaching): A Neural Architecture for Camera-Arm Robots

Dimitri Ognibene; Christian Balkenius; Gianluca Baldassarre

The active vision and attention-for-action frameworks propose that, in organisms, attention and perception are closely integrated with action and learning. This work proposes a novel bio-inspired integrated neural-network architecture that, on the one hand, uses attention to guide action and furnish its parameters and, on the other, uses the effects of action to train the task-oriented, top-down attention components of the system. The architecture is tested with both a simulated and a real camera-arm robot engaged in a reaching task. The results highlight the computational opportunities and difficulties deriving from a close integration of attention, action, and learning.
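
A toy version of this closed loop, with all details assumed rather than taken from the paper's neural network: the fixated location doubles as the reach target, and reach success is the only training signal for top-down attention.

```python
import numpy as np

# Toy closed loop: look somewhere (epistemic action), reach where you look
# (pragmatic action), and let reach success train the attention values
# with a simple bandit-style update.
rng = np.random.default_rng(0)
N, EPS, LR = 10, 0.2, 0.3
target = 7                    # location where reaching succeeds
q = np.zeros(N)               # learned top-down attention values

for episode in range(300):
    fix = rng.integers(N) if rng.random() < EPS else int(np.argmax(q))
    reward = 1.0 if fix == target else 0.0   # outcome of the reach
    q[fix] += LR * (reward - q[fix])         # action effect trains attention

print("learned to attend location", int(np.argmax(q)))
```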


International Conference on Development and Learning | 2007

Learning to select targets within targets in reaching tasks

Oliver Herbort; Dimitri Ognibene; Martin V. Butz; Gianluca Baldassarre

We present a developmental neural network model of motor learning and control, called RL_SURE_REACH. In a childhood phase, a motor controller for goal-directed reaching movements with a redundant arm develops unsupervised. In subsequent task-specific learning phases, the neural network acquires goal-modulation skills. These skills enable RL_SURE_REACH to master a task that was used in a psychological experiment by Trommershäuser, Maloney, and Landy (2003). This task required participants to select aim points within targets that maximize the likelihood of hitting a rewarded target and minimize the likelihood of accidentally hitting an adjacent penalty area. The neural network acquires the necessary skills by means of a reinforcement-learning-based modulation of the mapping from visual representations to the target representation of the motor controller. This mechanism enables the model to closely replicate the data from the original experiment. In conclusion, the effectiveness of learned actions can be significantly enhanced by fine-tuning action selection based on the combination of information about the statistical properties of the motor system with different environmental payoff scenarios.
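
The underlying decision problem has a compact form: pick the aim point that maximizes expected gain given the motor system's noise. The sketch below works it out for a one-dimensional version with Gaussian noise; the geometry, payoffs, and noise level are illustrative numbers, not the experiment's parameters.

```python
import numpy as np
from scipy.stats import norm

# Choose the aim point maximizing expected gain under Gaussian motor noise,
# with a rewarded strip adjacent to a penalty strip (illustrative setup).
SIGMA = 3.0                    # motor noise std (mm), assumed
REWARD, PENALTY = 100, -500    # payoffs, assumed
target = (0.0, 9.0)            # rewarded strip along one axis
penalty = (-9.0, 0.0)          # adjacent penalty strip

def hit_prob(region, aim):
    lo, hi = region
    return norm.cdf(hi, loc=aim, scale=SIGMA) - norm.cdf(lo, loc=aim, scale=SIGMA)

def expected_gain(aim):
    return REWARD * hit_prob(target, aim) + PENALTY * hit_prob(penalty, aim)

aims = np.linspace(0.0, 9.0, 181)
best = aims[np.argmax([expected_gain(a) for a in aims])]
print(f"optimal aim point: {best:.2f} mm into the target")
# The optimum shifts away from the penalty region as noise or |PENALTY| grows.
```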


Bioinspiration & Biomimetics | 2013

Contextual action recognition and target localization with an active allocation of attention on a humanoid robot.

Dimitri Ognibene; Eris Chinellato; Miguel Sarabia; Yiannis Demiris

Exploratory gaze movements are fundamental for gathering the most relevant information about a partner during social interactions. Inspired by the cognitive mechanisms underlying human social behaviour, we have designed and implemented a system for dynamic attention allocation that actively controls gaze movements during a visual action recognition task by exploiting its own action execution predictions. While observing a partner's reaching movement, our humanoid robot contextually estimates the goal position of the partner's hand and the location in space of the candidate targets. This is done while actively gazing around the environment, with the purpose of optimizing the gathering of information relevant to the task. Experimental results in a simulated environment show that active gaze control, based on the internal simulation of actions, provides a relevant advantage with respect to other action perception approaches, both in terms of estimation precision and of the time required to recognize an action. Moreover, our model reproduces and extends some experimental results on human attention during action perception.
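
One common way to cast this kind of active gaze control, sketched here under assumed details (binary cue, fixed sensor reliability) rather than the paper's model: maintain a belief over candidate reach targets and fixate wherever the expected posterior uncertainty is lowest.

```python
import numpy as np

# Sketch: pick the fixation that maximally reduces uncertainty about which
# target the partner is reaching for, under an assumed binary sensor model.
def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_posterior_entropy(belief, fixation, p_cue=0.9):
    # Cue observed at the fixated target: "hand heading here" (True/False),
    # correct with probability p_cue (assumed sensor reliability).
    h = 0.0
    for cue in (True, False):
        like = np.where(np.arange(len(belief)) == fixation,
                        p_cue if cue else 1 - p_cue,
                        (1 - p_cue) if cue else p_cue)
        joint = like * belief
        p_obs = joint.sum()
        if p_obs > 0:
            h += p_obs * entropy(joint / p_obs)
    return h

belief = np.array([0.5, 0.3, 0.2])   # current belief over 3 candidate targets
best = min(range(3), key=lambda i: expected_posterior_entropy(belief, i))
print("fixate candidate target", best)
```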


International Conference on Development and Learning | 2010

How can bottom-up information shape learning of top-down attention-control skills?

Dimitri Ognibene; Giovanni Pezzulo; Gianluca Baldassarre

How does bottom-up information affect the development of top-down attentional control skills during the learning of visuomotor tasks? Why is the eye's fovea so small? Strong evidence supports the idea that in humans foveation is mainly guided by task-specific skills, but how these are learned is still an important open problem. We designed and implemented a simulated neural eye-arm coordination model to study the development of attention control in a search-and-reach task involving simple coloured stimuli. The model is endowed with a hard-wired bottom-up attention saliency map and a top-down attention component which acquires task-specific knowledge on potential gaze targets and their spatial relations. This architecture achieves high performance very fast. To explain this result, we argue that: (a) the interaction between bottom-up and top-down mechanisms supports the development of task-specific attention control skills by allowing an efficient exploration of potentially useful gaze targets; (b) bottom-up mechanisms boost the exploitation of the initial, limited task-specific knowledge by actively selecting areas where it can be suitably applied; (c) bottom-up processes shape object representations, their value, and their roles (these can change during learning, e.g., distractors can become useful attentional cues); (d) increasing the size of the fovea alleviates perceptual aliasing, but at the same time increases input processing costs and the number of trials required to learn. Overall, the results indicate that bottom-up attention mechanisms can play a relevant role in attention control, especially during the acquisition of new task-specific skills, but also during task performance.
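
Point (b) can be sketched as a change to the exploration step of attention learning; the details below are assumptions, not the paper's model: exploratory fixations are sampled from the bottom-up saliency distribution instead of uniformly, so early top-down knowledge gets tried where it is most likely to apply.

```python
import numpy as np

# Sketch: saliency-guided exploration during learning of top-down attention.
rng = np.random.default_rng(0)
saliency = rng.random(16)        # flattened bottom-up map (stand-in)
q = np.zeros(16)                 # top-down values, initially naive
EPS = 0.3                        # exploration rate, assumed

def choose_fixation():
    if rng.random() < EPS:       # explore: saliency-weighted, not uniform
        return int(rng.choice(16, p=saliency / saliency.sum()))
    return int(np.argmax(q))     # exploit learned top-down values
```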


Frontiers in Neurorobotics | 2010

Reading as active sensing: a computational model of gaze planning during word recognition

Marcello Ferro; Dimitri Ognibene; Giovanni Pezzulo; Vito Pirrelli

We offer a computational model of gaze planning during reading that consists of two main components: a lexical representation network, acquiring lexical representations from input texts (a subset of the Italian CHILDES database), and a gaze planner, designed to recognize written words by mapping strings of characters onto lexical representations. The model implements an active sensing strategy that selects which characters of the input string are to be fixated, depending on the predictions dynamically made by the lexical representation network. We analyze the developmental trajectory of the system in performing the word recognition task as a function of increasing lexical competence and, correspondingly, increasing lexical prediction ability. We conclude by discussing how our approach can be scaled up in the context of an active sensing strategy applied to a robotic setting.
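
A minimal sketch of prediction-driven character fixation, with a toy lexicon standing in for the learned lexical network (all details assumed): fixate the character position where the lexicon's prediction is most uncertain given the characters identified so far.

```python
import math

# Sketch: pick the most informative character position to fixate next,
# i.e., the one with the highest entropy over the remaining candidates.
LEXICON = ["cane", "casa", "cena", "cera"]   # toy Italian word list

def candidates(known):
    return [w for w in LEXICON if all(w[i] == c for i, c in known.items())]

def position_entropy(pos, known):
    words = candidates(known)
    counts = {}
    for w in words:
        counts[w[pos]] = counts.get(w[pos], 0) + 1
    n = len(words)
    return -sum(c / n * math.log(c / n) for c in counts.values())

known = {0: "c"}                 # first character already fixated
unseen = [i for i in range(4) if i not in known]
next_fix = max(unseen, key=lambda i: position_entropy(i, known))
print("fixate character position", next_fix)   # -> 2, the most ambiguous slot
```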


IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems | 2012

The Human-Robot Cloud: Situated collective intelligence on demand

Nikolaos Mavridis; Thirimachos Bourlai; Dimitri Ognibene

The Human-Robot Cloud (HRC) extends cloud computing in two important directions. First, while traditional cloud computing enables transparent utilization of distributed computational and storage resources, the HRC additionally enables the utilization of (a) distributed sensing (sensor-network technology) and (b) actuator networks (including robot networks); the HRC thus extends the concept of cloud computing by connecting it to the “physical world” through sensing and action. Second, while traditional cloud computing involves only electronic components, such as computers and storage devices, the HRC's capability is extended by the support of human physical and cognitive “components” as part of the cloud, who are expected neither to be experts nor to be engaged with the cloud full-time. Such components are primarily expected to interact with the system for only short periods of time (seconds), essentially providing crowd-servicing for the cloud. Human components provide any or a mixture of the following: (a) input arising from their sensory faculties (auditory, visual, etc.), thus acting as “intelligent sensors” attached to the cloud; (b) input resulting from the usage of their cognitive faculties (pattern recognition, prediction, identification, planning, etc.), thus acting as “intelligent systems” attached to the cloud; and (c) actuation services (by moving their bodies or other objects), thus acting as “actuators” attached to the cloud. The proposed HRC thus aims to achieve the best of both worlds, carrying out tasks that are very difficult or impossible for either humans or machines alone. Furthermore, the HRC enables the construction of situated agents exhibiting collective intelligence on demand, and the transformation of situated agency from a “capital investment” into a service, whose components can be provided by multiple providers in a fashion transparent to the end user.


Computational Intelligence for Modelling, Control and Automation | 2005

Fuzzy-based Schema Mechanisms in AKIRA

Giovanni Pezzulo; Dimitri Ognibene; Gianguglielmo Calvi; Daniela Lalia

We compare action selection and schema mechanisms for robotic control, focusing mainly on the reactive vs. anticipatory distinction. We present AKIRA, an agent-based hybrid architecture, focusing on its capabilities for designing fuzzy-based schema models. We implement reactive and anticipatory mechanisms in AKIRA and compare them in an experimental set-up in the visual search domain.
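
For a flavor of what a fuzzy-based schema mechanism computes, here is a minimal sketch (illustrative only, not AKIRA's API): each schema's activation is the fuzzy degree of match between its preconditions and the current percept, and the most active schema acts.

```python
# Sketch of fuzzy schema-based action selection.
def fuzzy_and(*vals):
    return min(vals)                     # standard Goedel t-norm

schemas = {
    "track":  lambda p: fuzzy_and(p["target_visible"], p["target_moving"]),
    "search": lambda p: 1.0 - p["target_visible"],
}

percept = {"target_visible": 0.7, "target_moving": 0.4}
activations = {name: match(percept) for name, match in schemas.items()}
winner = max(activations, key=activations.get)
print(winner, activations)   # -> track {'track': 0.4, 'search': 0.3}
```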

Collaboration


Dive into Dimitri Ognibene's collaborations.

Top Co-Authors

Jürgen Schmidhuber

Dalle Molle Institute for Artificial Intelligence Research
