Publications


Featured research published by Swapnaa Jayaraman.


Cognition | 2016

From faces to hands: Changing visual input in the first two years

Caitlin M. Fausey; Swapnaa Jayaraman; Linda B. Smith

Human development takes place in a social context. Two pervasive sources of social information are faces and hands. Here, we provide the first report of the visual frequency of faces and hands in the everyday scenes available to infants. These scenes were collected by having infants wear head cameras during unconstrained everyday activities. Our corpus of 143 hours of infant-perspective scenes, collected from 34 infants aged 1 month to 2 years, was sampled for analysis at 1/5 Hz. The major finding from this corpus is that the faces and hands of social partners are not equally available throughout the first two years of life. Instead, there is an earlier period of dense face input and a later period of dense hand input. At all ages, hands in these scenes were primarily in contact with objects, and the spatio-temporal co-occurrence of hands and faces was greater than expected by chance. The orderliness of the shift from faces to hands suggests a principled transition in the contents of visual experiences and is discussed in terms of the role of developmental gates on the timing and statistics of visual experiences.


PLOS ONE | 2015

The Faces in Infant-Perspective Scenes Change over the First Year of Life

Swapnaa Jayaraman; Caitlin M. Fausey; Linda B. Smith

Mature face perception has its origins in the face experiences of infants. However, little is known about the basic statistics of faces in early visual environments. We used head cameras to capture and analyze over 72,000 infant-perspective scenes from 22 infants aged 1-11 months as they engaged in daily activities. The frequency of faces in these scenes declined markedly with age: faces were present for 15 minutes in every waking hour for the youngest infants but for only 5 minutes for the oldest infants. In general, the available faces were well characterized by three properties: (1) they belonged to relatively few individuals; (2) they were close and visually large; and (3) they presented views showing both eyes. These three properties most strongly characterized the face corpora of our youngest infants and constitute environmental constraints on the early development of the visual system.


Developmental Psychology | 2017

Why are faces denser in the visual experiences of younger than older infants?

Swapnaa Jayaraman; Caitlin M. Fausey; Linda B. Smith

Recent evidence from studies using head cameras suggests that the frequency of faces directly in front of infants declines over the first year and a half of life, a result that has implications for the development of and evolutionary constraints on face processing. Two experiments tested two opposing hypotheses about this observed age-related decline in the frequency of faces in infant views. By the people-input hypothesis, there are more faces in view for younger infants because people are more often physically in front of younger than older infants. This hypothesis predicts that not just faces but views of other body parts will decline with age. By the face-input hypothesis, the decline is strictly about faces, not people or other body parts in general. The two experiments, one using a time-sampling method (84 infants, 3 to 24 months of age) and the other using analyses of head-camera images (36 infants, 1 to 24 months), provide strong support for the face-input hypothesis. The results suggest developmental constraints on the environment that ensure faces are prevalent early in development.


Trends in Cognitive Sciences | 2018

The Developing Infant Creates a Curriculum for Statistical Learning

Linda B. Smith; Swapnaa Jayaraman; Elizabeth M. Clerkin; Chen Yu

New efforts are using head cameras and eye-trackers worn by infants to capture everyday visual environments from the point of view of the infant learner. From this vantage point, the training sets for statistical learning develop as the sensorimotor abilities of the infant develop, yielding a series of ordered datasets for visual learning that differ in content and structure between timepoints but are highly selective at each timepoint. These changing environments may constitute a developmentally ordered curriculum that optimizes learning across many domains. Future advances in computational models will be necessary to connect the developmentally changing content and statistics of infant experience to the internal machinery that does the learning.


Archive | 2015

Neural Bases for Social Attention in Healthy Humans

Aina Puce; Marianne Latinus; Alejandra Rossi; Elizabeth daSilva; Francisco J. Parada; Scott A. Love; Arian Ashourvan; Swapnaa Jayaraman

In this chapter we focus on the neural processes that occur in the mature healthy human brain in response to evaluating another’s social attention. We first examine the brain’s sensitivity to the gaze direction of others, social attention (as typically indicated by gaze contact), and joint attention. Brain regions such as the superior temporal sulcus (STS), the amygdala, and the fusiform gyrus have previously been demonstrated to be sensitive to gaze changes, most frequently with functional magnetic resonance imaging (fMRI). Neurophysiological investigations, using electroencephalography (EEG) and magnetoencephalography (MEG), have identified event-related potentials (ERPs) such as the N170 that are sensitive to changes in gaze direction and head direction. We advance a putative model that explains findings relating to the neurophysiology of social attention, based mainly on our studies. This model proposes two brain modes of social information processing: a nonsocial “Default” mode and a social mode that we have named “Socially Aware”. In Default mode, there is an internal focus on executing actions to achieve our goals, as evident in studies in which passive viewing or tasks involving nonsocial judgments have been used. In contrast, Socially Aware mode is active when making explicit social judgments. Switching between these two modes is rapid and can occur via either top-down or bottom-up routes. From a different perspective, most of the literature, including our own studies, has focused on social attention phenomena as experienced from the first-person perspective, i.e., gaze changes or social attention directed at, or away from, the observer. However, in daily life we are actively involved in observing social interactions between others, where their social attention focus may not include us, or their gaze may not meet ours. Hence, changes in eye gaze and social attention are also experienced from the third-person perspective. This area of research is still small but nevertheless important in the study of social and joint attention, and we discuss this literature briefly at the end of the chapter. We conclude the chapter with some outstanding questions aimed at the main knowledge gaps in the literature.


Joint IEEE International Conferences on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) | 2014

Special session: Dynamic interactions between visual experiences, actions and word learning

Beata J. Grzyb; Allegra Cattani; Angelo Cangelosi; Caroline Floccia; Hanako Yoshida; Joseph M. Burling; Anna M. Borghi; Swapnaa Jayaraman; Linda B. Smith; Alfredo F. Pereira; Isabel C. Lisboa; Emanuel Sousa; Jorge A. Santos; Wolfram Erlhagen; Estela Bicho

The primary aim of this special session is to inform the conference's interdisciplinary audience about the state of the art in developmental studies of action and language interactions. Action and language develop in parallel, impacting each other, and as such bootstrap action, social, and cognitive development. We will present recent empirical evidence on developmental dependencies between visual experiences and word learning, followed by a discussion of the potential implications of this research for embodied theories of action and language integration.


Joint IEEE International Conferences on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) | 2015

What accounts for developmental shifts in optic flow sensitivity?

Rick O. Gilmore; Florian Raudies; Swapnaa Jayaraman


Developmental Science | 2014

Redundant constraints on human face perception

Linda B. Smith; Swapnaa Jayaraman


Journal of Vision | 2013

Visual statistics of infants’ ordered experiences

Swapnaa Jayaraman; Caitlin M. Fausey; Linda B. Smith


Cognitive Science | 2013

Developmental See-Saws: Ordered visual input in the first two years of life

Swapnaa Jayaraman; Caitlin M. Fausey; Linda B. Smith

Collaboration


Dive into Swapnaa Jayaraman's collaborations.

Top Co-Authors

Linda B. Smith
Indiana University Bloomington

Rick O. Gilmore
Pennsylvania State University

Alejandra Rossi
Indiana University Bloomington

Alfredo F. Pereira
Indiana University Bloomington

Allegra Cattani
Plymouth State University

Arian Ashourvan
University of Pennsylvania