
Publication


Featured research published by Moreno I. Coco.


Cognitive Science | 2012

Scan Patterns Predict Sentence Production in the Cross-Modal Processing of Visual Scenes.

Moreno I. Coco; Frank Keller

Most everyday tasks involve multiple modalities, which raises the question of how the processing of these modalities is coordinated by the cognitive system. In this paper, we focus on the coordination of visual attention and linguistic processing during speaking. Previous research has shown that objects in a visual scene are fixated before they are mentioned, leading us to hypothesize that the scan pattern of a participant can be used to predict what he or she will say. We test this hypothesis using a data set of cued scene descriptions of photo-realistic scenes. We demonstrate that similar scan patterns are correlated with similar sentences, within and between visual scenes; and that this correlation holds for three phases of the language production process (target identification, sentence planning, and speaking). We also present a simple algorithm that uses scan patterns to accurately predict associated sentences by utilizing similarity-based retrieval.
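The abstract describes a similarity-based retrieval algorithm without spelling out its details. A minimal sketch of the idea, assuming scan patterns are encoded as sequences of fixated-object labels and that the most similar training scan pattern contributes its paired sentence (function and variable names here are illustrative, not the paper's):

```python
from difflib import SequenceMatcher

def scan_similarity(a, b):
    """Similarity between two scan patterns, each a sequence of
    fixated-object labels (e.g. ["man", "bag", "man", "floor"])."""
    return SequenceMatcher(None, a, b).ratio()

def retrieve_sentence(query_scan, corpus):
    """Similarity-based retrieval: return the sentence paired with the
    most similar scan pattern in the training corpus, where `corpus`
    is a list of (scan_pattern, sentence) pairs."""
    best_scan, best_sentence = max(
        corpus, key=lambda pair: scan_similarity(query_scan, pair[0]))
    return best_sentence

corpus = [
    (["man", "bag", "man"], "the man is holding a bag"),
    (["dog", "ball", "dog"], "the dog chases a ball"),
]
print(retrieve_sentence(["man", "bag", "bag"], corpus))
# → the man is holding a bag
```

A sequence-overlap ratio stands in here for whatever scan-pattern similarity measure the paper actually uses; the point is only that similar gaze sequences retrieve similar sentences.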


Quarterly Journal of Experimental Psychology | 2014

The interplay of bottom-up and top-down mechanisms in visual guidance during object naming

Moreno I. Coco; George L. Malcolm; Frank Keller

An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data from a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects because they deviate from scene context and hence require longer processing. Overall, this study suggests that different sources of information are used interactively to guide visual attention to the targets to be named, and it raises new questions for existing theories of visual attention.


Frontiers in Psychology | 2013

The impact of attentional, linguistic, and visual features during object naming

Alasdair Clarke; Moreno I. Coco; Frank Keller

Object detection and identification are fundamental to human vision, and there is mounting evidence that objects guide the allocation of visual attention. However, the role of objects in tasks involving multiple modalities is less clear. To address this question, we investigate object naming, a task in which participants have to verbally identify objects they see in photorealistic scenes. We report an eye-tracking study that investigates which features (attentional, visual, and linguistic) influence object naming. We find that the amount of visual attention directed toward an object, its position and saliency, along with linguistic factors such as word frequency, animacy, and semantic proximity, significantly influence whether the object will be named or not. We then ask how features from different modalities are combined during naming, and find significant interactions between saliency and position, saliency and linguistic features, and attention and position. We conclude that when the cognitive system performs tasks such as object naming, it uses input from one modality to constrain or enhance the processing of other modalities, rather than processing each input modality independently.


Cognitive Processing | 2015

Integrating mechanisms of visual guidance in naturalistic language production

Moreno I. Coco; Frank Keller

Situated language production requires the integration of visual attention and linguistic processing. Previous work has not conclusively disentangled the role of perceptual scene information and structural sentence information in guiding visual attention. In this paper, we present an eye-tracking study demonstrating that three types of guidance, perceptual, conceptual, and structural, interact to control visual attention. In a cued language production experiment, we manipulate perceptual guidance (scene clutter) and conceptual guidance (cue animacy) and measure structural guidance (syntactic complexity of the utterance). Analysis of the time course of language production, before and during speech, reveals that all three forms of guidance affect the complexity of visual responses, quantified in terms of the entropy of attentional landscapes and the turbulence of scan patterns, especially during speech. We find that perceptual and conceptual guidance mediate the distribution of attention in the scene, whereas structural guidance closely relates to scan pattern complexity. Furthermore, the eye-voice spans of the cued object and its perceptual competitor are similar, and their latency is mediated by both perceptual and structural guidance. These results rule out a strict interpretation of structural guidance as the single dominant form of visual guidance in situated language production. Rather, the phase of the task and the associated demands of cross-modal cognitive processing determine the mechanisms that guide attention.
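The abstract quantifies visual responses via the entropy of attentional landscapes. A minimal sketch of that idea, assuming a count-based landscape over labelled scene regions (the paper's actual landscapes may be duration-weighted fixation maps; all names here are illustrative):

```python
import math
from collections import Counter

def attention_entropy(fixated_regions):
    """Shannon entropy (in bits) of an attentional landscape: the
    distribution of fixations over scene regions. High entropy means
    attention is spread over many regions; low entropy means it is
    concentrated on few."""
    counts = Counter(fixated_regions)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    return -sum(p * math.log2(p) for p in probs)

# Attention concentrated on one object vs. spread evenly over four:
print(attention_entropy(["cue"] * 8))                              # → 0.0
print(attention_entropy(["cue", "competitor", "bg1", "bg2"] * 2))  # → 2.0
```

Under this measure, a complex visual response (attention distributed across the scene) yields high entropy, matching the abstract's use of entropy as a complexity index.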


Quarterly Journal of Experimental Psychology | 2015

The interaction of visual and linguistic saliency during syntactic ambiguity resolution

Moreno I. Coco; Frank Keller

Psycholinguistic research using the visual world paradigm has shown that the processing of sentences is constrained by the visual context in which they occur. Recently, there has been growing interest in the interactions observed when both language and vision provide relevant information during sentence processing. In three visual world experiments on syntactic ambiguity resolution, we investigate how visual and linguistic information influence the interpretation of ambiguous sentences. We hypothesize that (1) visual and linguistic information both constrain which interpretation is pursued by the sentence processor, and (2) the two types of information act upon the interpretation of the sentence at different points during processing. In Experiment 1, we show that visual saliency is utilized to anticipate the upcoming arguments of a verb. In Experiment 2, we operationalize linguistic saliency using intonational breaks and demonstrate that these give prominence to linguistic referents. These results confirm prediction (1). In Experiment 3, we manipulate visual and linguistic saliency together and find that both types of information are used, but at different points in the sentence, to incrementally update its current interpretation. This finding is consistent with prediction (2). Overall, our results suggest an adaptive processing architecture in which different types of information are used when they become available, optimizing different aspects of situated language processing.


Psychonomic Bulletin & Review | 2016

When expectancies collide: Action dynamics reveal the interaction between stimulus plausibility and congruency

Moreno I. Coco; Nicholas D. Duran

The cognitive architecture routinely relies on expectancy mechanisms to process the plausibility of stimuli and establish their sequential congruency. In two computer mouse-tracking experiments, we use a cross-modal verification task to uncover the interaction between plausibility and congruency by examining their temporal signatures of activation competition as expressed in a computer-mouse movement decision response. In this task, participants verified the content congruency of sentence and scene pairs that varied in plausibility. The order of presentation (sentence-scene, scene-sentence) was varied between participants to uncover any differential processing. Our results show that implausible but congruent stimuli triggered less accurate and slower responses than implausible and incongruent stimuli, and were associated with more complex angular mouse trajectories, independent of the order of presentation. This study provides novel evidence of a dissociation between the temporal signatures of plausibility and congruency detection in decision responses.


Topics in Cognitive Science | 2018

Performance in a Collaborative Search Task: The Role of Feedback and Alignment

Moreno I. Coco; Rick Dale; Frank Keller

When people communicate, they coordinate a wide range of linguistic and non-linguistic behaviors. This process of coordination is called alignment, and it is assumed to be fundamental to successful communication. In this paper, we question this assumption and investigate whether disalignment is a more successful strategy in some cases. More specifically, we hypothesize that alignment correlates with task success only when communication is interactive. We present results from a spot-the-difference task in which dyads of interlocutors have to decide whether they are viewing the same scene or not. Interactivity was manipulated in three conditions by increasing the amount of information shared between interlocutors (no exchange of feedback, minimal feedback, full dialogue). We use recurrence quantification analysis to measure the alignment between the scan patterns of the interlocutors. We found that interlocutors who could not exchange feedback aligned their gaze more, and that increased gaze alignment correlated with decreased task success in this case. When feedback was possible, in contrast, interlocutors utilized it to better organize their joint search strategy by diversifying visual attention. This is evidenced by reduced overall alignment in the minimal feedback and full dialogue conditions. However, only the dyads engaged in a full dialogue increased their gaze alignment over time to achieve successful performance. These results suggest that alignment per se does not imply communicative success, as most models of dialogue assume. Rather, the effect of alignment depends on the type of alignment, on the goals of the task, and on the presence of feedback.


IEEE Transactions on Cognitive and Developmental Systems | 2017

Multilevel Behavioral Synchronization in a Joint Tower-Building Task

Moreno I. Coco; Leonardo Badino; Pietro Cipresso; Alice Chirico; Elisabetta Ferrari; Giuseppe Riva; Andrea Gaggioli; Alessandro D'Ausilio

Human-to-human sensorimotor interaction can only be fully understood by modeling the patterns of bodily synchronization and reconstructing the underlying mechanisms of optimal cooperation. We designed a tower-building task to address this goal. We recorded the upper-body kinematics of dyads and focused on the velocity profiles of the head and wrist. We applied recurrence quantification analysis to examine the dynamics of synchronization within and across experimental trials, and to compare the roles of leader and follower. Our results show that the leader was more auto-recurrent than the follower, making his or her behavior more predictable. When looking at the cross-recurrence of the dyad, we find different patterns of synchronization for head and wrist motion. At the wrist, dyads synchronized at short lags, and this pattern was weakly modulated within trials and invariant across them. Head motion, instead, synchronized at longer lags and increased both within and between trials: a phenomenon mostly driven by the leader. Our findings point to a multilevel nature of human-to-human sensorimotor synchronization, and may provide an experimentally solid benchmark for identifying the basic primitives of motion that maximize behavioral coupling between humans and artificial agents.
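The core quantity behind the cross-recurrence analyses in this paper (and the scan-pattern alignment measure above) can be sketched in a deliberately simplified form. Full recurrence quantification analysis typically embeds each velocity series in phase space before thresholding; that step is omitted here, and the toy data and names are illustrative:

```python
def cross_recurrence_rate(x, y, radius, lag=0):
    """Cross-recurrence rate at a given lag: the fraction of samples at
    which two continuous series (e.g. leader and follower wrist-velocity
    profiles) fall within `radius` of each other, with `y` shifted by
    `lag` samples. High values at short lags indicate tight coupling."""
    n = min(len(x), len(y)) - lag
    hits = sum(1 for i in range(n) if abs(x[i] - y[i + lag]) <= radius)
    return hits / n

# Toy velocity profiles: the follower repeats the leader's cycle
# exactly one sample later.
leader =   [0.1, 0.5, 0.9, 0.5, 0.1, 0.5, 0.9, 0.5]
follower = [0.5, 0.1, 0.5, 0.9, 0.5, 0.1, 0.5, 0.9]

print(cross_recurrence_rate(leader, follower, radius=0.05, lag=0))  # → 0.0
print(cross_recurrence_rate(leader, follower, radius=0.05, lag=1))  # → 1.0
```

Scanning the rate across a range of lags yields the lag profile the abstract refers to: wrist motion recurring at short lags, head motion at longer ones.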


Neuropsychologia | 2017

Disentangling stimulus plausibility and contextual congruency: Electro-physiological evidence for differential cognitive dynamics

Moreno I. Coco; Susana Araújo; Karl Magnus Petersson

Expectancy mechanisms are routinely used by the cognitive system in stimulus processing and in anticipation of appropriate responses. Electrophysiology research has documented negative shifts of brain activity when expectancies are violated within a local stimulus context (e.g., reading an implausible word in a sentence) or more globally between consecutive stimuli (e.g., a narrative of images with an incongruent end). In this EEG study, we examine the interaction between expectancies operating at the level of stimulus plausibility and at the more global level of contextual congruency, to provide evidence for, or against, a dissociation of the underlying processing mechanisms. We asked participants to verify the congruency of pairs of cross-modal stimuli (a sentence and a scene), which varied in plausibility. ANOVAs on ERP amplitudes in selected windows of interest show that congruency violation has longer-lasting (from 100 to 500 ms) and more widespread effects than plausibility violation (from 200 to 400 ms). We also observed critical interactions between these factors, whereby incongruent and implausible pairs elicited stronger negative shifts than their congruent counterparts, both early on (100-200 ms) and between 400 and 500 ms. Our results suggest that the integration mechanisms are sensitive to both global and local effects of expectancy in a modality-independent manner. Overall, we provide novel insights into the interdependence of expectancy during meaning integration of cross-modal stimuli in a verification task.

Highlights: We investigate violations of stimulus plausibility and congruency using EEG. Participants verify the congruency between sentences and scenes varying in plausibility. Congruency violations have stronger effects (100 ms, 300 ms, 400 ms) than plausibility violations (200 ms, 300 ms). Violations of both factors result in the largest processing costs (100 ms, 400 ms). Interdependent mechanisms are employed to process plausibility and congruency.


Autism & Developmental Language Impairments | 2018

Impaired implicit learning of syntactic structure in children with developmental language disorder: Evidence from syntactic priming

Maria Garraffa; Moreno I. Coco; Holly P. Branigan

Background and aims: Implicit learning mechanisms associated with detecting structural regularities have been proposed to underlie both the long-term acquisition of linguistic structure and a short-term tendency to repeat linguistic structure across sentences (structural priming) in typically developing children. Recent research has suggested that a deficit in such mechanisms may explain the inconsistent trajectory of language learning displayed by children with Developmental Language Disorder. We used a structural priming paradigm to investigate whether a group of children with Developmental Language Disorder showed impaired implicit learning of syntax (syntactic priming) following individual syntactic experiences, and the time course of any such effects.

Methods: Five- to six-year-old Italian-speaking children with Developmental Language Disorder and typically developing age-matched and language-matched controls played a picture-description-matching game with an experimenter. The experimenter's descriptions were systematically manipulated so that children were exposed to both active and passive structures, in a randomized order. We investigated whether children's descriptions used the same abstract syntax (active or passive) as the experimenter had used on an immediately preceding turn (no delay) or three turns earlier (delay). We further examined whether children's syntactic production changed with increasing experience of passives within the experiment.

Results: The syntactic production of children with Developmental Language Disorder was influenced by the syntax of the experimenter's descriptions in the same way as that of typically developing language-matched children, but showed a different pattern from that of typically developing age-matched children. Children with Developmental Language Disorder were more likely to produce passive syntax immediately after hearing a passive sentence than an active sentence, but this tendency was smaller than in typically developing age-matched children. After two intervening sentences, children with Developmental Language Disorder no longer showed a significant syntactic priming effect, whereas typically developing age-matched children did. None of the groups showed a significant effect of cumulative syntactic experience.

Conclusions: Children with Developmental Language Disorder show a pattern of syntactic priming effects that is consistent with an impairment in the implicit learning mechanisms associated with the detection and extraction of abstract structural regularities in linguistic input. The results suggest that this impairment involves reduced initial learning from each syntactic experience, rather than atypically rapid decay following intact initial learning.

Implications: Children with Developmental Language Disorder may learn less from each linguistic experience than typically developing children, and so require more input to achieve the same learning outcome with respect to syntax. Structural priming is an effective technique for manipulating both input quality and quantity to determine precisely how Developmental Language Disorder is related to language input, and to investigate how input tailored to the cognitive profile of this population can be optimised in designing interventions.

Collaboration


Dive into Moreno I. Coco's collaborations.

Top Co-Authors

Frank Keller

University of Edinburgh

Rick Dale

University of California

George L. Malcolm

George Washington University