Publications


Featured research published by Anna Belardinelli.


Cognitive Computation | 2010

Where to Look Next? Combining Static and Dynamic Proto-objects in a TVA-based Model of Visual Attention

Marco Wischnewski; Anna Belardinelli; Werner X. Schneider; Jochen J. Steil

To decide "Where to look next?" is a central function of the attention system of humans, animals and robots. Control of attention depends on three factors: low-level static and dynamic visual features of the environment (bottom-up), medium-level visual features of proto-objects, and the task (top-down). We present a novel integrated computational model that includes all these factors in a coherent architecture based on findings and constraints from the primate visual system. The model combines spatially inhomogeneous processing of static features, spatio-temporal motion features and task-dependent priority control in the form of the first computational implementation of saliency computation as specified by the "Theory of Visual Attention" (TVA, [7]). Importantly, static and dynamic processing streams are fused at the level of visual proto-objects, that is, ellipsoidal visual units that have the additional medium-level features of position, size, shape and orientation of the principal axis. Proto-objects serve as input to the TVA process that combines top-down and bottom-up information for computing attentional priorities so that relatively complex search tasks can be implemented. To this end, separately computed static and dynamic proto-objects are filtered and subsequently merged into one combined map of proto-objects. For each proto-object, attentional priorities in the form of attentional weights are computed according to TVA. The target of the next saccade is the center of gravity of the proto-object with the highest weight according to the task. We illustrate the approach by applying it to several real-world image sequences and show that it is robust to parameter variations.
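
In TVA, each object's attentional weight is a sum of feature evidence scaled by task-driven pertinence values, w_x = Σ_j η(x, j) π_j, and the paper takes the next saccade target to be the center of gravity of the highest-weighted proto-object. Below is a minimal, hypothetical sketch of that weighting step; the ProtoObject class, feature values, and pertinences are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class ProtoObject:
    """Ellipsoidal visual unit with medium-level features."""
    def __init__(self, centroid, features):
        self.centroid = np.asarray(centroid, dtype=float)  # (x, y) center of gravity
        self.features = np.asarray(features, dtype=float)  # eta(x, j): evidence for each feature j

def attentional_weights(proto_objects, pertinence):
    """TVA attentional weights: w_x = sum_j eta(x, j) * pi_j."""
    pi = np.asarray(pertinence, dtype=float)
    return np.array([obj.features @ pi for obj in proto_objects])

def next_saccade_target(proto_objects, pertinence):
    """Target of the next saccade: centroid of the highest-weighted proto-object."""
    w = attentional_weights(proto_objects, pertinence)
    return proto_objects[int(np.argmax(w))].centroid

# Example: two proto-objects scored on three task-relevant features
objs = [ProtoObject((120, 80), [0.9, 0.1, 0.3]),
        ProtoObject((40, 200), [0.2, 0.8, 0.7])]
print(next_saccade_target(objs, pertinence=[0.1, 1.0, 0.5]))  # -> [ 40. 200.]
```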


IEEE Transactions on Systems, Man, and Cybernetics | 2007

Bottom-Up Gaze Shifts and Fixations Learning by Imitation

Anna Belardinelli; Fiora Pirri; Andrea Carbone

The ability to follow the gaze of conspecifics is a critical component in the development of social behaviors, and many efforts have been directed to studying the earliest age at which it begins to develop in infants. Developmental and neurophysiological studies suggest that imitative learning takes place once gaze-following abilities are fully established and joint attention can support the shared behavior required by imitation. Accordingly, gaze-following acquisition should be precursory to most machine learning tasks, and imitation learning can be seen as the earliest modality for acquiring meaningful gaze shifts and for understanding the structural substrate of fixations. Indeed, if some early attentional process, based on a suitable combination of gaze shifts and fixations, could be learned by the robot, then several demonstration learning tasks would be dramatically simplified. In this paper, we describe a methodology for learning gaze shifts based on imitation of gaze following with a gaze machine, which we purposefully introduced to make the robot gaze imitation conspicuous. The machine allows the robot to share and imitate gaze shifts and fixations of a caregiver through a mutual vergence. This process is then suitably generalized by learning both the scene's salient features toward which the gaze is directed and the way saccadic programming is attained. Salient features are modeled by a family of Gaussian mixtures. These, together with learned transitions, are generalized via hidden Markov models to account for humanlike gaze shifts, allowing salient locations to be discriminated.
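
A rough sketch of the modeling pipeline described above: salient features modeled by a Gaussian mixture, and gaze-shift dynamics generalized with a hidden Markov model. The data shapes, component counts, and the use of scikit-learn/hmmlearn are assumptions for illustration, not the paper's code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture  # pip install scikit-learn
from hmmlearn import hmm                     # pip install hmmlearn

rng = np.random.default_rng(0)

# Fixation features recorded from the caregiver, e.g. (x, y) image locations
fixations = rng.normal(loc=[160, 120], scale=30, size=(500, 2))

# 1) Model salient locations as a family of Gaussian mixtures
gmm = GaussianMixture(n_components=5, random_state=0).fit(fixations)
print(np.round(gmm.means_))  # learned salient locations

# 2) Generalize fixation-to-fixation transitions with an HMM whose
#    emissions are the fixation coordinates
gaze_hmm = hmm.GaussianHMM(n_components=5, covariance_type="full", random_state=0)
gaze_hmm.fit(fixations)

# Generate a humanlike sequence of gaze shifts from the learned model
samples, states = gaze_hmm.sample(20)
print(samples[:3], states[:3])
```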


Attention in Cognitive Systems | 2009

Motion Saliency Maps from Spatiotemporal Filtering

Anna Belardinelli; Fiora Pirri; Andrea Carbone

For artificial systems acting and perceiving in a dynamic world, a core ability is to focus on those aspects of the environment that can be crucial for the task at hand. Perception in autonomous systems needs to be filtered by a biologically inspired selective ability; attention in dynamic settings is therefore becoming a key research issue. In this paper we present a model for motion saliency map computation based on spatiotemporal filtering. We extract a measure of coherent motion energy and select, via a center-surround mechanism, the relevant zones that accumulate the most energy and therefore contrast with their surroundings in a given time slot. The method was tested on synthetic and real video sequences, supporting its biological plausibility.
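
A minimal sketch of the general idea (spatiotemporal filtering, accumulated motion energy, center-surround contrast). The specific filters and parameters below are assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

def motion_saliency(frames, sigma_s=2.0, sigma_t=1.5,
                    sigma_center=3.0, sigma_surround=12.0):
    """frames: array (T, H, W) of grayscale frames in a time slot."""
    v = np.asarray(frames, dtype=float)
    # Spatial smoothing of each frame, then a temporal derivative across
    # frames: a simple space-smoothed, time-differentiating filter
    v = gaussian_filter(v, sigma=(0, sigma_s, sigma_s))
    dt = gaussian_filter1d(v, sigma=sigma_t, axis=0, order=1)
    # Motion energy accumulated over the time slot
    energy = (dt ** 2).sum(axis=0)
    # Center-surround: keep zones whose energy contrasts with surroundings
    center = gaussian_filter(energy, sigma_center)
    surround = gaussian_filter(energy, sigma_surround)
    saliency = np.clip(center - surround, 0, None)
    return saliency / (saliency.max() + 1e-9)

# Example: a bright blob moving across an otherwise static sequence
T, H, W = 16, 64, 64
frames = np.zeros((T, H, W))
for t in range(T):
    frames[t, 30:34, 10 + 2 * t:14 + 2 * t] = 1.0
print(motion_saliency(frames).max())
```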


Vision Research | 2015

Goal-oriented gaze strategies afforded by object interaction.

Anna Belardinelli; Oliver Herbort; Martin V. Butz

Task influence has long been known to play a major role in the way our eyes scan a scene. Yet most studies focus either on visual search or on sequences of active tasks in complex real-world scenarios. Few studies have contrasted the distribution of eye fixations during viewing and grasping objects. Here we address how attention is deployed when different actions are planned on objects, in contrast to when the same objects are categorized. In this respect, we are particularly interested in the role every fixation plays in the unfolding dynamics of action control. We conducted an eye-tracking experiment in which participants were shown images of real-world objects. Subjects were either to assign the displayed objects to one of two classes (categorization task), to mimic lifting (lifting task), or to mimic opening the object (opening task). Results suggest that even on simplified, two-dimensional displays the eyes reveal the participants' intentions in an anticipatory fashion. For the active tasks, already the second saccade after stimulus onset was directed towards the central region between the two locations where the thumb and the rest of the fingers would be placed. An analysis of saliency at fixation locations showed that fixations in active tasks have higher correspondence with salient features than fixations in the passive task. We suggest that attention flexibly coordinates visual selection for information retrieval and motor planning, working as a gateway between three components, linking the task (action), the object (target), and the effector (hand) in an effective way.
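
The saliency-at-fixations analysis mentioned above can be illustrated by sampling a saliency map at fixated pixels and comparing the result with the map's overall level. The sketch below is a hypothetical illustration of that idea, not the study's analysis code.

```python
import numpy as np

def saliency_at_fixations(saliency_map, fixations):
    """saliency_map: (H, W) array in [0, 1]; fixations: iterable of (x, y) pixels."""
    s = np.asarray(saliency_map, dtype=float)
    vals = [s[int(round(y)), int(round(x))] for x, y in fixations]
    return float(np.mean(vals))

# Example: compare saliency at fixated locations against the map's mean
rng = np.random.default_rng(1)
smap = rng.random((240, 320))
fix_active = [(100, 120), (150, 130), (160, 118)]  # e.g. fixations from a lifting trial
print(saliency_at_fixations(smap, fix_active), smap.mean())
```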


Journal of Vision | 2016

It's in the eyes: Planning precise manual actions before execution

Anna Belardinelli; Madeleine Y. Stepper; Martin V. Butz

It is well known that our eyes typically fixate those objects in a scene with which interactions are about to unfold. During manual interactions, our eyes usually anticipate the next subgoal and thus serve top-down, goal-driven information extraction requirements, probably driven by a schema-based task representation. On the other hand, motor control research concerning object manipulations has extensively demonstrated how grasping choices are often influenced by deeper considerations about the final goal of manual interactions. Here we show that these deeper considerations are also reflected in early eye fixation behavior, significantly before the hand makes contact with the object. In this study, subjects were asked to either pretend to drink out of the presented object or to hand it over to the experimenter. The objects were presented upright or upside down, thus affording a thumb-up (prone) or a thumb-down (supine) grasp. Eye fixation data show a clear anticipatory preference for the region where the index finger is going to be placed. Indeed, fixations highly correlate with the final index finger position, thus subserving the planning of the actual manual action. Moreover, eye fixations reveal several orders of manual planning: fixation distributions depend not only on the object orientation but also on the interaction task. These results suggest a fully embodied, bidirectional sensorimotor coupling of eye-hand coordination: the eyes help in planning and determining the actual manual object interaction, considering where to grasp the presented object in light of its orientation and type and the actual manual task to be accomplished with it.


Proceedings of the 2006 International Symposium on Practical Cognitive Agents and Robots | 2006

Robot task-driven attention

Anna Belardinelli; Fiora Pirri; Andrea Carbone

Visual attention is a crucial skill in human beings in that it allows optimal deployment of visual processing and memory resources. It turns out to be even more useful in search tasks, since to select salient zones we use top-down priors, which depend on the observed scene, along with bottom-up criteria. In this paper we show how we constructed a robotic model of attention, inspired by studies on human attention and gaze shifting. Our model relies on a measure of salience related to the particular type of environment and to the given task. This measure is hierarchically structured and consists of both top-down components, learned from the tutor, and bottom-up components, as perceived in the scene by the robot. Hence, with such a general model, the robot can perform its own scan path in a similar environment and report its findings.
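
A minimal sketch of such a hierarchically structured salience measure, combining learned top-down components with bottom-up components under task-specific weights. The component names and weights below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def combined_salience(bottom_up_maps, top_down_maps, task_weights):
    """Combine per-feature (H, W) maps into one normalized salience map."""
    total = None
    for name, w in task_weights.items():
        # Each feature contributes its learned top-down map (if any)
        # plus its bottom-up map (if any), scaled by a task weight
        component = top_down_maps.get(name, 0) + bottom_up_maps.get(name, 0)
        total = w * component if total is None else total + w * component
    return total / (np.max(total) + 1e-9)

# Example with hypothetical feature channels
H, W = 48, 64
bottom_up = {"intensity": np.random.rand(H, W), "motion": np.random.rand(H, W)}
top_down = {"target_color": np.random.rand(H, W)}           # learned from the tutor
weights = {"intensity": 0.2, "motion": 0.3, "target_color": 0.5}  # task-specific
print(combined_salience(bottom_up, top_down, weights).shape)
```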


Cognitive Processing | 2006

A biologically plausible robot attention model, based on space and time

Anna Belardinelli; Fiora Pirri

In this work we describe a biologically inspired approach to robot attention, developed on the basis of experiments aimed at mapping human visual search onto robot behaviour, allowing in particular for depth as a further feature in the attention model. By means of a purposely designed machine, we studied fixation zones elicited from scanning paths performed during task-driven wandering of subjects' gaze over a cluttered scene. Hence, we defined preference criteria and a utility function accounting for the optimization of visual endeavours. This function would allow a robot to select meaningful spots without the need to process the whole scene.


Experimental Brain Research | 2016

Anticipatory eye fixations reveal tool knowledge for tool interaction.

Anna Belardinelli; Marissa Barabas; Marc Himmelbach; Martin V. Butz

Action-oriented eye-tracking studies have shown that eye fixations reveal much about current behavioral intentions. The eyes typically fixate those positions of a tool or object where the fingers will be placed next, or those positions in a scene where obstacles need to be avoided to successfully reach or transport a tool or object. Here, we asked to what extent eye fixations can also reveal active cognitive inference processes, which are expected to integrate bottom-up visual information with internal knowledge for planning suitable object interactions task-dependently. In accordance with the available literature, we expected that task-relevant knowledge would include sensorimotor, semantic, and mechanical aspects. To investigate whether and in what way this internal knowledge influences eye fixation behavior while planning an object interaction, we presented pictures of familiar and unfamiliar tools and instructed participants to either pantomime 'lifting' or 'using' the respective tool. When confronted with unfamiliar tools, participants fixated the tool's effector part more closely and for longer than with familiar tools. This difference was particularly prominent during 'using' trials when compared with 'lifting' trials. We suggest that this difference indicates that the brain actively extracts mechanical information about the unknown tool in order to infer its appropriate usage. Moreover, the successive fixations over a trial indicate that a dynamic, task-oriented, active cognitive process unfolds, which integrates available tool knowledge with visually gathered information to plan and determine the currently intended tool interaction.


AI Magazine | 2014

Report on the Thirty-Fifth Annual Cognitive Science Conference

Anna Belardinelli; Martin V. Butz

COGSCI 2013, the 35th annual meeting of the Cognitive Science Society and the first to take place in Germany, was held from the 31st of July to the 3rd of August. Cognitive scientists with varied backgrounds gathered in Berlin to report on and discuss expanding lines of research, spanning multiple fields but striving in one direction: to understand cognition with all its properties and peculiarities. A rich program featuring keynotes, symposia, workshops and tutorials, alongside regular oral and poster sessions, offered attendees a vivid and exciting overview of where the discipline is going while serving as a fertile forum for interdisciplinary discussion and exchange. This report attempts to point out why this should matter to artificial intelligence as a whole.


Robot and Human Interactive Communication | 2006

Spatial discrimination in task-driven attention

Anna Belardinelli; Fiora Pirri; Andrea Carbone

Visual attention is becoming an increasingly important capability with which to endow computer vision systems and autonomous agents. Starting from a biologically inspired model of attention, we present an experiment aimed at studying selective attention in 3D space. Depth has been shown to be an important feature affecting the way attention is deployed when observing a scene. We studied preferential scanning paths and fixation zones in a task-driven wandering of the tutor's gaze over a scene where multiple targets had been disposed on different depth planes. We supposed that selective attention would aggregate targets into cliques that maximize utility, while minimizing the visual effort produced when passing from closer planes to further planes or between different cluttered locations. By means of a purposely designed machine, we stored visual and motor data of the tutor's head; we clustered different scanning paths of the gaze shifts according to velocity and space criteria to determine a preference model of attentional shifts and fixations. We subsequently propose a utility model that can formalize the acquired information and establish a vision-based attentional framework for robots. We show that an interpretation of task-driven gaze orienting based on the presented preference criteria correctly accounts for the studied behaviours, as further reported in the literature.
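
The velocity-based clustering of gaze data into fixations and saccades can be illustrated with a simple velocity-threshold (I-VT-style) segmentation. The threshold and sampling rate below are assumptions, not the paper's values.

```python
import numpy as np

def segment_gaze(xy, sample_rate_hz=250.0, velocity_threshold=30.0):
    """xy: (N, 2) gaze positions in degrees; returns a label per sample."""
    xy = np.asarray(xy, dtype=float)
    # Instantaneous angular velocity (deg/s) between consecutive samples
    v = np.linalg.norm(np.diff(xy, axis=0), axis=1) * sample_rate_hz
    v = np.concatenate([[0.0], v])
    # Samples faster than the threshold belong to saccades, the rest to fixations
    return np.where(v > velocity_threshold, "saccade", "fixation")

# Example: a fixation, a fast gaze shift, then another fixation
trace = np.vstack([np.full((50, 2), [5.0, 5.0]),
                   np.linspace([5, 5], [15, 5], 5),
                   np.full((50, 2), [15.0, 5.0])])
labels = segment_gaze(trace)
print((labels == "saccade").sum(), "saccadic samples")
```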

Collaboration


Dive into Anna Belardinelli's collaborations.

Top Co-Authors

Fiora Pirri

Sapienza University of Rome

Andrea Carbone

Pierre-and-Marie-Curie University

Jochen J. Steil

Braunschweig University of Technology
