Publication


Featured research published by Nida Latif.


PLOS ONE | 2014

Movement Coordination during Conversation

Nida Latif; Adriano Vilela Barbosa; Monica S. Castelhano; Kevin G. Munhall

Behavioral coordination and synchrony contribute to a common biological mechanism that maintains communication, cooperation and bonding within many social species, such as primates and birds. Similarly, human language and social systems may also be attuned to coordination to facilitate communication and the formation of relationships. Gross similarities in movement patterns and convergence in the acoustic properties of speech have already been demonstrated between interacting individuals. In the present studies, we investigated how coordinated movements contribute to observers’ perception of affiliation (friends vs. strangers) between two conversing individuals. We used novel computational methods to quantify motor coordination and demonstrated that individuals familiar with each other coordinated their movements more frequently. Observers used coordination to judge affiliation between conversing pairs but only when the perceptual stimuli were restricted to head and face regions. These results suggest that observed movement coordination in humans might contribute to perceptual decisions based on availability of information to perceivers.
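The abstract's "novel computational methods" for quantifying motor coordination are not spelled out here. As a minimal, hedged sketch of one standard approach (not the authors' actual pipeline), the Python below computes windowed cross-correlation between two talkers' motion-energy time series and summarizes how often their movements are strongly correlated; the window length, lag range, and threshold are illustrative assumptions.

```python
# Minimal sketch: windowed cross-correlation as a proxy for movement
# coordination between two talkers. NOT the paper's actual pipeline;
# window size, lag range, and threshold are illustrative assumptions.
import numpy as np

def windowed_xcorr(a, b, win=60, max_lag=15):
    """Pearson r between 1-D motion signals a and b in sliding windows,
    evaluated over a range of temporal lags.

    Returns an (n_windows, 2*max_lag + 1) array of correlations.
    """
    n = min(len(a), len(b)) - max_lag
    starts = np.arange(max_lag, n - win, win // 2)   # 50% window overlap
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.zeros((len(starts), len(lags)))
    for i, s in enumerate(starts):
        x = a[s:s + win]
        for j, lag in enumerate(lags):
            r[i, j] = np.corrcoef(x, b[s + lag:s + lag + win])[0, 1]
    return r

def coordination_rate(r, thresh=0.5):
    """Fraction of windows whose peak |r| across lags exceeds a threshold --
    one crude summary of how frequently a pair coordinates."""
    return float(np.mean(np.max(np.abs(r), axis=1) > thresh))
```

On a measure like this, the finding that familiar pairs coordinate more frequently would appear as a higher coordination rate for friend pairs than for stranger pairs.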


Autism Research | 2016

Looking, seeing and believing in autism: Eye movements reveal how subtle cognitive processing differences impact in the social domain.

Valerie Benson; Monica S. Castelhano; Philippa L. Howard; Nida Latif; Keith Rayner

Adults with High Functioning Autism Spectrum Disorders (ASD) viewed scenes with people in them, while having their eye movements recorded. The task was to indicate, using a button press, whether the pictures were normal, or in some way weird or odd. Oddities in the pictures were categorized as violations of either perceptual or social norms. Compared to a Typically Developed (TD) control group, the ASD participants were equally able to categorize the scenes as odd or normal, but they took longer to respond. The eye movement patterns showed that the ASD group made more fixations and revisits to the target areas in the odd scenes compared with the TD group. Additionally, when the ASD group first fixated the target areas in the scenes, they failed to initially detect the social oddities. These two findings have clear implications for processing difficulties in ASD for the social domain, where it is important to detect social cues on‐line, and where there is little opportunity to go back and recheck possible cues in fast dynamic interactions.
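The reported eye movement measures (more fixations and revisits to target areas, and whether the first fixation detected the oddity) imply an area-of-interest analysis. Below is a minimal sketch of that bookkeeping; the fixation format and the rectangular AOI are assumptions for illustration, not the study's actual analysis code.

```python
# Minimal AOI sketch: count fixations on a target region and revisits
# (re-entries after leaving it). Fixation format and rectangular AOI
# are assumptions, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float            # screen position in pixels
    y: float
    duration_ms: float

def aoi_stats(fixations, aoi):
    """aoi = (x_min, y_min, x_max, y_max); returns (n_fixations, n_revisits)."""
    x0, y0, x1, y1 = aoi
    inside = [x0 <= f.x <= x1 and y0 <= f.y <= y1 for f in fixations]
    n_fix = sum(inside)
    # An entry is an outside->inside transition; the first entry is not a revisit.
    entries = sum(1 for prev, cur in zip([False] + inside, inside)
                  if cur and not prev)
    return n_fix, max(entries - 1, 0)
```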


Multisensory Research | 2017

Seeing the Way: The Role of Vision in Conversation Turn Exchange Perception

Nida Latif; Agnès Alsius; Kevin G. Munhall

During conversations, we engage in turn-taking behaviour that proceeds back and forth effortlessly as we communicate. In any given day, we participate in numerous face-to-face interactions that contain social cues from our partner, and we interpret these cues to rapidly identify whether it is appropriate to speak. Although the benefit provided by visual cues has been well established in several areas of communication, the use of visual information to make turn-taking decisions during conversation is unclear. Here we conducted two experiments to investigate the role of visual information in identifying conversational turn exchanges. We presented clips containing single utterances spoken by individuals engaged in a natural conversation with another person. These utterances occurred either right before a turn exchange (i.e., when the current talker would finish and the other would begin) or at a point where the same talker would continue speaking. In Experiment 1, participants were presented with audiovisual, auditory-only and visual-only versions of our stimuli and identified whether a turn exchange would occur or not. We demonstrated that although participants could identify turn exchanges with unimodal information alone, they performed best in the audiovisual modality. In Experiment 2, we presented participants with audiovisual turn exchanges where the talker, the listener or both were visible. We showed that participants suffered a cost in identifying turn exchanges when visual cues from the listener were not available. Overall, we demonstrate that although auditory information is sufficient for successful conversation, visual information plays an important role in the overall efficiency of communication.
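Because each clip is either a turn exchange or a continuation, identification performance in each modality can be summarized with standard signal detection measures. The sketch below is a hedged illustration of that convention, not the authors' published analysis; the 1/(2N) correction is one common choice.

```python
# Minimal sketch: per-modality sensitivity (d') for identifying turn
# exchanges, treating exchange clips as "signal" and same-talker
# continuations as "noise". A generic convention, not paper-specific code.
from statistics import NormalDist

def d_prime(hits, n_signal, false_alarms, n_noise):
    """d' with a 1/(2N) correction so rates of 0 or 1 stay finite."""
    z = NormalDist().inv_cdf
    h = min(max(hits / n_signal, 1 / (2 * n_signal)), 1 - 1 / (2 * n_signal))
    f = min(max(false_alarms / n_noise, 1 / (2 * n_noise)), 1 - 1 / (2 * n_noise))
    return z(h) - z(f)

# e.g. one participant in one modality (numbers are made up):
# d_prime(hits=42, n_signal=50, false_alarms=12, n_noise=50)
```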


Attention, Perception, & Psychophysics | 2018

Knowing when to respond: the role of visual information in conversational turn exchanges

Nida Latif; Agnès Alsius; Kevin G. Munhall

When engaging in conversation, we efficiently go back and forth with our partner, organizing our contributions in reciprocal turn-taking behavior. Using multiple auditory and visual cues, we make online decisions about when it is the appropriate time to take our turn. In two experiments, we demonstrated, for the first time, that auditory and visual information serve complementary roles when making such turn-taking decisions. We presented clips of single utterances spoken by individuals engaged in conversations in audiovisual, auditory-only or visual-only modalities. These utterances occurred either right before a turn exchange (i.e., 'Turn-Ends') or right before the next sentence spoken by the same talker (i.e., 'Turn-Continuations'). In Experiment 1, participants discriminated between Turn-Ends and Turn-Continuations in order to synchronize a button-press response to the moment the talker would stop speaking. We showed that participants were best at discriminating between Turn-Ends and Turn-Continuations in the audiovisual condition. However, in terms of response synchronization, participants were equally precise at timing their responses to a Turn-End in the audiovisual and auditory-only conditions, showing no advantage of visual information. In Experiment 2, we used a gating paradigm, where increasing segments of Turn-Ends and Turn-Continuations were presented, and participants predicted whether a turn exchange would occur at the end of the sentence. We found an audiovisual advantage in detecting an upcoming turn early in the perception of a turn exchange. Together, these results suggest that visual information functions as an early signal indicating an upcoming turn exchange, while auditory information is used to precisely time a response to the turn end.
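A gating paradigm presents progressively longer fragments of each utterance, and a standard summary is the identification point: the earliest gate from which responses become and stay correct. A minimal sketch of that computation, with a per-gate boolean encoding assumed for illustration (not taken from the paper):

```python
# Minimal gating-paradigm sketch: find the "identification point", the
# earliest gate after which every response remains correct. The boolean
# per-gate encoding is an assumption for illustration.
def identification_point(correct_by_gate):
    """correct_by_gate: one bool per gate, ordered short -> long.

    Returns the 1-based gate index where responses become and stay
    correct, or None if the final gate is still answered incorrectly.
    """
    ip = None
    for gate, ok in enumerate(correct_by_gate, start=1):
        if ok and ip is None:
            ip = gate          # candidate identification point
        elif not ok:
            ip = None          # correctness did not persist; reset
    return ip

# identification_point([False, False, True, True, True]) -> 3
```

On this measure, an earlier average identification point in the audiovisual condition than in auditory-only would correspond to the reported audiovisual advantage in detecting an upcoming turn.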


Journal of Experimental Psychology: Human Perception and Performance | 2014

The art of gaze guidance

Nida Latif; Arlene Gehmacher; Monica S. Castelhano; Kevin G. Munhall


Journal of the Acoustical Society of America | 2017

Linguistic initiation signals increase auditory feedback error correction

Agnès Alsius; Takashi Mitsuya; Nida Latif; Kevin G. Munhall


Journal of Vision | 2018

The role of perceptual and contextual information in social event segmentation

Nida Latif; Francesca Capozzi; Jelena Ristic


Cognitive Science | 2016

In the event of a turn exchange: Visual information in the perception of turn-taking behavior in natural conversation.

Nida Latif; Agnès Alsius; Kevin G. Munhall


Journal of Vision | 2015

How do we make social decisions? Gaze strategies used to predict and optimize social information during conversation.

Nida Latif; Mashal K. Haque; Monica S. Castelhano; Kevin G. Munhall


Collaboration


Dive into Nida Latif's collaborations.

Top Co-Authors

Adriano Vilela Barbosa

Universidade Federal de Minas Gerais

Valerie Benson

University of Southampton

Eric Vatikiotis-Bateson

University of British Columbia
