Publication


Featured research published by Alfredo F. Pereira.


Developmental Science | 2009

Developmental changes in visual object recognition between 18 and 24 months of age

Alfredo F. Pereira; Linda B. Smith

Two experiments examined developmental changes in children's visual recognition of common objects during the period from 18 to 24 months. Experiment 1 examined children's ability to recognize common category instances that presented three different kinds of information: (1) richly detailed and prototypical instances that presented local and global shape information along with color, texture, and surface features; (2) the same rich and prototypical shapes but with no color, texture, or surface features; or (3) only abstract and global representations of object shape in terms of geometric volumes. Significant developmental differences were observed only for the abstract shape representations in terms of geometric volumes, the kind of shape representation that has been hypothesized to underlie mature object recognition. Further, these differences were strongly linked in individual children to the number of object names in their productive vocabulary. Experiment 2 replicated these results and showed further that the less advanced children's object recognition was based on the piecemeal use of individual features and parts rather than overall shape. The results provide further evidence for significant and rapid developmental changes in object recognition during the same period in which children first learn object names. The implications of the results for theories of visual object recognition, the relation of object recognition to category learning, and underlying developmental processes are discussed.


Psychonomic Bulletin & Review | 2014

A bottom-up view of toddler word learning

Alfredo F. Pereira; Linda B. Smith; Chen Yu

A head camera was used to examine the visual correlates of object name learning by toddlers as they played with novel objects and as the parent spontaneously named those objects. The toddlers’ learning of the object names was tested after play, and the visual properties of the head camera images during naming events associated with learned and unlearned object names were analyzed. Naming events associated with learning had a clear visual signature, one in which the visual information itself was clean and visual competition among objects was minimized. Moreover, for learned object names, the visual advantage of the named target over competitors was sustained, both before and after the heard name. The findings are discussed in terms of the visual and cognitive processes that may depend on clean sensory input for learning and also on the sensory–motor, cognitive, and social processes that may create these optimal visual moments for learning.
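
The "visual advantage" of a named object over its competitors can be illustrated with a small computation. Below is a minimal sketch, not the authors' actual analysis pipeline: given per-frame image-area fractions for each object in a head-camera view around a naming event, it returns how much larger the named target is than its biggest competitor. The frame data and the `visual_advantage` helper are hypothetical.

```python
# Hypothetical sketch: quantify how visually dominant a named object is
# around a naming event, given per-object image-area fractions per frame.
# This is an illustration, not the analysis used in the paper.

def visual_advantage(frames, target):
    """frames: list of dicts mapping object name -> fraction of image area.
    Returns the mean advantage of the target over its largest competitor."""
    advantages = []
    for areas in frames:
        target_area = areas.get(target, 0.0)
        competitors = [a for name, a in areas.items() if name != target]
        largest_competitor = max(competitors, default=0.0)
        advantages.append(target_area - largest_competitor)
    return sum(advantages) / len(advantages) if advantages else 0.0

# Example: three head-camera frames around the moment the parent says "ball"
frames = [
    {"ball": 0.30, "cup": 0.05, "duck": 0.02},
    {"ball": 0.42, "cup": 0.03, "duck": 0.01},
    {"ball": 0.38, "cup": 0.04, "duck": 0.02},
]
print(visual_advantage(frames, "ball"))  # large positive value = clean, dominant view
```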


IEEE Transactions on Autonomous Mental Development | 2009

Active Information Selection: Visual Attention Through the Hands

Chen Yu; Linda B. Smith; Hongwei Shen; Alfredo F. Pereira; Thomas G. Smith

An important goal in studying both human intelligence and artificial intelligence is to understand how a natural or an artificial learning system deals with the uncertainty and ambiguity of the real world. For a natural intelligence system such as a human toddler, the relevant aspects of a learning environment are only those that make contact with the learner's sensory system. In real-world interactions, what the child perceives critically depends on his own actions, as these actions bring information into and out of the learner's sensory field. The present analyses indicate how, in the case of a toddler playing with toys, these perception-action loops may simplify the learning environment by selecting relevant information and filtering out irrelevant information. This paper reports new findings using a novel method that seeks to describe the visual learning environment from a young child's point of view and measures the visual information that a child perceives in real-time toy play with a parent. The main results are: 1) what the child perceives depends primarily on his own actions but also on his social partner's actions; 2) manual actions, in particular, play a critical role in creating visual experiences in which one object dominates; 3) this selecting and filtering of visual objects through the actions of the child provides more constrained and cleaner input that seems likely to facilitate cognitive learning processes. These findings have broad implications for how one studies and thinks about human and artificial learning systems.
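
The notion of one object "dominating" the child's view can also be sketched computationally. The toy function below, a hedged illustration rather than the paper's measure, counts the fraction of head-camera frames in which the largest object is several times bigger than the next largest; the frame data, threshold, and helper name are assumptions.

```python
# Hedged sketch: how often does a single object dominate the child's view?
# A frame counts as "dominated" when the largest object is at least `ratio`
# times the size of the second largest. Illustrative only.

def dominance_rate(frames, ratio=2.0):
    """frames: list of dicts mapping object name -> fraction of image area."""
    dominated = 0
    for areas in frames:
        sizes = sorted(areas.values(), reverse=True) + [0.0, 0.0]  # pad short frames
        if sizes[0] > 0 and sizes[0] >= ratio * sizes[1]:
            dominated += 1
    return dominated / len(frames) if frames else 0.0

# Example: during manual activity, one held toy tends to fill the view
frames = [
    {"truck": 0.45, "block": 0.05},
    {"truck": 0.50, "block": 0.04, "doll": 0.03},
    {"truck": 0.10, "block": 0.12, "doll": 0.09},
]
print(dominance_rate(frames))  # fraction of frames with one clearly dominant object
```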


Connection Science | 2008

Social coordination in toddler's word learning: interacting systems of perception and action

Alfredo F. Pereira; Linda B. Smith; Chen Yu

We measured turn-taking in terms of hand and head movements and asked whether the global rhythm of the participants' body activity relates to word learning. Six dyads composed of parents and toddlers (M = 18 months) interacted in a tabletop task wearing motion-tracking sensors on their hands and heads. Parents were instructed to teach the labels of 10 novel objects, and the child was later tested on a name-comprehension task. Using dynamic time warping, we compared the motion data of all body-part pairs, within and between partners. For every dyad, we also computed an overall measure of the quality of the interaction that takes into consideration the state of the interaction when the parent uttered an object label and the overall smoothness of the turn-taking. This overall interaction quality measure was correlated with the total number of words learned. In particular, head movements were inversely related to the other partner's hand movements, and the degree of bodily coupling of parent and toddler predicted the words that children learned during the interaction. The implications of joint body dynamics for understanding joint coordination of activity in a social interaction, its scaffolding effect on the child's learning, and its use in the development of artificial systems are discussed.
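
Dynamic time warping is the core comparison step mentioned above. The sketch below shows a plain textbook DTW between two 1-D motion traces (e.g., parent head speed vs. child hand speed); the example series are made up, and the paper's actual preprocessing and distance settings are not reproduced here.

```python
# Minimal dynamic time warping (DTW) sketch for comparing two body-part
# motion time series. Smaller distance = more tightly coupled motion.

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW between two 1-D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Example: two short, roughly aligned movement-speed traces (invented data)
parent_head = [0.1, 0.4, 0.9, 0.5, 0.2]
child_hand  = [0.1, 0.2, 0.5, 0.9, 0.4, 0.2]
print(dtw_distance(parent_head, child_hand))
```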


Decision Support Systems | 2011

Collaborative Dynamic Decision Making: A Case Study from B2B Supplier Selection

Gianluca Campanella; Alfredo F. Pereira; Rita A. Ribeiro; Maria Leonilde Rocha Varela

The problem of supplier selection can easily be modeled as a multiple-criteria decision making (MCDM) problem: businesses express their preferences with respect to suppliers, which can then be ranked and selected. This approach has two major pitfalls: first, it does not consider a dynamic scenario in which suppliers and their ratings are constantly changing; second, it only addresses the problem from the point of view of a single business and cannot be easily applied when considering more than one business. To overcome these problems, we introduce a method for supplier selection that builds upon the dynamic MCDM framework of [1] and, by means of a linear programming model, can be used in the case of multiple collaborating businesses planning their next batch of orders together.
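
For readers unfamiliar with MCDM-style supplier ranking, here is a minimal sketch of a plain weighted-sum ranking. It is an illustration only: the paper builds on a dynamic MCDM framework combined with a linear programming model for multiple collaborating businesses, none of which is reproduced here, and the criteria, weights, and ratings below are invented.

```python
# Illustrative weighted-sum MCDM ranking of suppliers (not the paper's method).

def rank_suppliers(ratings, weights):
    """ratings: {supplier: {criterion: score in [0, 1]}}; weights should sum to 1."""
    scores = {
        supplier: sum(weights[c] * r.get(c, 0.0) for c in weights)
        for supplier, r in ratings.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ratings = {
    "SupplierA": {"price": 0.8, "quality": 0.6, "delivery": 0.9},
    "SupplierB": {"price": 0.6, "quality": 0.9, "delivery": 0.7},
    "SupplierC": {"price": 0.9, "quality": 0.5, "delivery": 0.6},
}
weights = {"price": 0.4, "quality": 0.4, "delivery": 0.2}
print(rank_suppliers(ratings, weights))  # best-to-worst supplier ranking
```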


Systems, Man and Cybernetics | 2001

Simulation of a trading multi-agent system

Pedro Mariano; Alfredo F. Pereira; Luis M. Correia; Rita A. Ribeiro; V. Abramov; Nick B. Szirbik; Jbm Jan Goossenaerts; Tshilidzi Marwala; P. De Wilde

In a trading scenario, agents interact with each other, selling and buying resources. In order to control the behavior of the trading scenario, the interactions must be coordinated. We present a brief discussion of communication types and coordination models applicable to multi-agent systems. We find a programmable tuple space more appropriate for managing and governing the interactions between the trading agents. We discuss the advantages of a trading agent model that deals with the trading strategy, concentrating on what to buy or sell. This relieves the agent of the task of coordinating negotiations and their revocation or acceptance; that is the task of the programmable tuple space.
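
A tuple space decouples agents: sellers publish offers, buyers retrieve matching ones, and neither needs to address the other directly. The sketch below is a toy Linda-style space with put/read/take; the real system uses a programmable tuple space with coordination rules, and the tuple format ("offer", seller, resource, price) is an assumption made for illustration.

```python
# Minimal tuple-space sketch (Linda-style) illustrating coordination between
# trading agents. Not the programmable tuple space described in the paper.

class TupleSpace:
    def __init__(self):
        self._tuples = []

    def put(self, tup):
        """An agent publishes a tuple, e.g. an offer to sell a resource."""
        self._tuples.append(tup)

    def read(self, pattern):
        """Return the first tuple matching the pattern (None = wildcard)."""
        for tup in self._tuples:
            if len(tup) == len(pattern) and all(
                p is None or p == t for p, t in zip(pattern, tup)
            ):
                return tup
        return None

    def take(self, pattern):
        """Like read, but also removes the tuple (e.g. accepting an offer)."""
        tup = self.read(pattern)
        if tup is not None:
            self._tuples.remove(tup)
        return tup

space = TupleSpace()
space.put(("offer", "agent1", "steel", 10.0))        # seller advertises a resource
print(space.take(("offer", None, "steel", None)))    # buyer accepts; offer is removed
```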


International Conference on Development and Learning | 2008

Embodied solution: The world from a toddler’s point of view

Chen Yu; Linda B. Smith; Alfredo F. Pereira

An important goal in studying both human intelligence and artificial intelligence is an understanding of how a natural or artificial learning system deals with the uncertainty and ambiguity in the real world. We suggest that the relevant aspects of a learning environment for the learner are only those that make contact with the learner's sensory system. Moreover, in a real-world interaction, what the learner perceives through his sensory system critically depends on both his own and his social partner's actions, and on his interactions with the world. In this way, the perception-action loops both within a learner and between the learner and his social partners may provide an embodied solution that significantly simplifies the social and physical learning environment and filters out irrelevant information for the current learning task, which ultimately leads to successful learning. In light of this, we report new findings using a novel method that seeks to describe the visual learning environment from a young child's point of view. The method consists of a multi-camera sensing environment with two head-mounted mini cameras placed on the child's and the parent's foreheads, respectively. The main results are that (1) the adult's and child's views are fundamentally different when they interact in the same environment; (2) what the child perceives most often depends on his own actions and his social partner's actions; (3) the actions generated by both social partners provide more constrained and cleaner input that facilitates learning. These findings have broad implications for how one studies and thinks about human and artificial learning systems.


International Conference on Information Visualization Theory and Applications | 2016

MUVTIME: a Multivariate time series visualizer for behavioral science

Emanuel Sousa; Tiago Malheiro; Estela Bicho; Wolfram Erlhagen; Jorge A. Santos; Alfredo F. Pereira

Marie Curie International Incoming Fellowship PIIF-GA-2011-301155; Portuguese Foundation for Science and Technology (FCT) project PTDC/PSI-PCO/121494/2010; AFP was also partially funded by the FCT project IF/00217/2013.


European Conference on Artificial Life | 2007

From the outside-in: embodied attention in toddlers

Linda B. Smith; Chen Yu; Alfredo F. Pereira

An important goal in cognitive development research is an understanding of the real-world physical and social environment in which learning takes place. However, the relevant aspects of this environment for the learner are only those that make contact with the learner's sensory system. We report new findings using a novel method that seeks to describe the visual learning environment from a young child's point of view. The method consists of a multi-camera sensing environment with two head-mounted mini cameras placed on the child's and the parent's foreheads, respectively. The main result is that the adult's and child's views are fundamentally different, in that the child's view is more dynamic and centered on one object at a time. These findings have broad implications for how one thinks about toddlers' attentional task as opposed to adults'. In one sense, toddlers have found a cheap solution: selectively attend not by changing internal weights but by bringing the attended object close to the eyes so that it is the only one in view.


Acta Neuropsychologica: The Official Journal of the Polish Neuropsychological Society | 2017

Visual-vestibular and postural analysis of motion sickness, panic, and acrophobia

Carlos M. Coelho; Janete Silva; Alfredo F. Pereira; Emanuel Sousa; Nattasuda Taephant; Kullaya Pisitsungkagarn; Jorge A. Santos

Supported by national funds and co-financed by FEDER through COMPETE2020 under the PT2020 Partnership Agreement (POCI-01-0145-FEDER-007653); FCT/3599-PPCDT/121494/PT.

Collaboration


Top co-authors of Alfredo F. Pereira:

Linda B. Smith
Indiana University Bloomington

Susan S. Jones
Indiana University Bloomington

Hongwei Shen
Indiana University Bloomington