
Publication


Featured research published by Bria Long.


Journal of Experimental Psychology: General | 2016

Mid-level perceptual features distinguish objects of different real-world sizes

Bria Long; Talia Konkle; Michael A. Cohen; George A. Alvarez

Understanding how perceptual and conceptual representations are connected is a fundamental goal of cognitive science. Here, we focus on a broad conceptual distinction that constrains how we interact with objects: real-world size. Although there appear to be clear perceptual correlates for basic-level categories (apples look like other apples, oranges look like other oranges), the perceptual correlates of broader categorical distinctions are largely unexplored, i.e., do small objects look like other small objects? Because there are many kinds of small objects (e.g., cups, keys), there may be no reliable perceptual features that distinguish them from big objects (e.g., cars, tables). Contrary to this intuition, we demonstrated that big and small objects have reliable perceptual differences that can be extracted by early stages of visual processing. In a series of visual search studies, participants found target objects faster when the distractor objects differed in real-world size. These results held when we broadly sampled big and small objects, when we controlled for low-level features and image statistics, and when we reduced objects to texforms—unrecognizable textures that loosely preserve an object's form. However, this effect was absent when we used more basic textures. These results demonstrate that big and small objects have reliably different mid-level perceptual features, and suggest that early perceptual information about broad-category membership may influence downstream object perception, recognition, and categorization processes.


Frontiers in Psychology | 2013

Insights on NIRS Sensitivity from a Cross-Linguistic Study on the Emergence of Phonological Grammar.

Yasuyo Minagawa-Kawai; Alejandrina Cristia; Bria Long; Inga Vendelin; Yoko Hakuno; Michel Dutat; Luca Filippin; D. Cabrol; Emmanuel Dupoux

Each language has a unique set of phonemic categories and phonotactic rules which determine permissible sound sequences in that language. Behavioral research demonstrates that one’s native language shapes the perception of both sound categories and sound sequences in adults, and neuroimaging results further indicate that the processing of native phonemes and phonotactics involves a left-dominant perisylvian brain network. Recent work using a novel technique, functional Near InfraRed Spectroscopy (NIRS), has suggested that a left-dominant network becomes evident toward the end of the first year of life as infants process phonemic contrasts. The present research project attempted to assess whether the same pattern would be seen for native phonotactics. We measured brain responses in Japanese- and French-learning infants to two contrasts: Abuna vs. Abna (a phonotactic contrast that is native in French, but not in Japanese) and Abuna vs. Abuuna (a vowel length contrast that is native in Japanese, but not in French). Results did not show a significant response to either contrast in either group, unlike both previous behavioral research on phonotactic processing and NIRS work on phonemic processing. To understand these null results, we performed similar NIRS experiments with Japanese adult participants. These data suggest that the infant null results arise from an interaction of multiple factors, involving the suitability of the experimental paradigm for NIRS measurements and stimulus perceptibility. We discuss the challenges facing this novel technique, particularly focusing on the optimal stimulus presentation which could yield strong enough hemodynamic responses when using the change detection paradigm.


Proceedings of the National Academy of Sciences of the United States of America | 2018

Mid-level visual features underlie the high-level categorical organization of the ventral stream

Bria Long; Chen-Ping Yu; Talia Konkle

Significance: While neural responses to object categories are remarkably systematic across human visual cortex, the nature of these responses has been hotly debated for the past 20 years. In this paper, a class of stimuli (texforms) is used to examine how mid-level features contribute to the large-scale organization of the ventral visual stream. Despite their relatively primitive visual appearance, these unrecognizable texforms elicited the entire large-scale organization of the ventral stream by animacy and object size. This work demonstrates that much of ventral stream organization can be explained by relatively primitive mid-level features without requiring explicit recognition of the objects themselves.

Human object-selective cortex shows a large-scale organization characterized by the high-level properties of both animacy and object size. To what extent are these neural responses explained by primitive perceptual features that distinguish animals from objects and big objects from small objects? To address this question, we used a texture synthesis algorithm to create a class of stimuli—texforms—which preserve some mid-level texture and form information from objects while rendering them unrecognizable. We found that unrecognizable texforms were sufficient to elicit the large-scale organizations of object-selective cortex along the entire ventral pathway. Further, the structure in the neural patterns elicited by texforms was well predicted by curvature features and by intermediate layers of a deep convolutional neural network, supporting the mid-level nature of the representations. These results provide clear evidence that a substantial portion of ventral stream organization can be accounted for by coarse texture and form information without requiring explicit recognition of intact objects.


Journal of Vision | 2015

Real-world object size is automatically activated by mid-level shape features

Bria Long; Talia Konkle; George A. Alvarez

When we recognize an object, we automatically know how big it is in the world (Konkle & Oliva, 2012). Here we asked whether this automatic activation relies on explicit recognition of the object's basic-level category, or whether it can be triggered by mid-level visual features. To explore this question, we gathered images of big and small objects (e.g., car, shoe), and then generated texture stimuli by coercing white noise to match the mid-level image statistics of the original objects (Freeman & Simoncelli, 2011). Behavioral ratings confirmed that these textures were unidentifiable at the basic level (N=30; M = 2.8%, SD = 4%). In Experiment 1, participants made a speeded judgment about which of two textures was visually bigger or smaller on the screen. Critically, the visual sizes of the textures were either congruent or incongruent with the real-world sizes of the original images. Participants were faster at judging the visual size of a texture when its original size was congruent (M = 504 ms) vs. incongruent (M = 517 ms) with its size on the screen (t(15) = 3.79, p < .01). This result suggests that these texture stimuli preserve shape features that are diagnostic of real-world size and automatically activate this association. Consistent with this interpretation, we found that a new set of observers could correctly classify these textures as big or small in the real world at a rate slightly above chance (N=30; small objects: 63.2%, big objects: 56.4%), and that the magnitude of the Stroop effect was greater when judging textures that were more consistently associated with big or small real-world sizes (F(1, 23) = 38, p < .001). Taken together, these results suggest that mid-level visual features are sufficient to automatically activate real-world size information. Meeting abstract presented at VSS 2015.


Journal of the Acoustical Society of America | 2010

Neural correlates of dialect perception in early infancy.

Natalia Egorova; Alejandrina Cristia; Inga Vendelin; Luca Filippin; Bria Long; Judit Gervain; Yasuyo Minagawa-Kawai; Emmanuel Dupoux

A wealth of behavioral research suggests that infants become increasingly specialized in their native dialect/language in infancy. In contrast, few studies document how this early specialization is reflected in neural activation, most of which have compared familiar and unfamiliar languages, and none focused on different dialects. This study aimed to fill that gap, focusing on cerebral activation in temporal areas, as measured with near infrared spectroscopy. Audiovisual infant‐directed speech was recorded from talkers of either Parisian or Quebecois French. These videos were presented to 5‐month‐old Parisian infants in blocks within which videos from two talkers alternated in one of two ways. In pure blocks, both talkers were either Parisian (pure‐familiar) or Quebecois (pure‐unfamiliar). In mixed blocks, the two talkers had different dialects. A robust and mostly bilateral activation was found for both mixed and pure blocks. Follow‐up comparisons revealed a stronger activation for mixed blocks than for ...


Journal of Vision | 2017

Mid-level perceptual features contain early cues to animacy

Bria Long; Viola S. Störmer; George A. Alvarez


Cognition | 2017

A familiar-size Stroop effect in the absence of basic-level recognition

Bria Long; Talia Konkle


Journal of Vision | 2015

Animate shape features influence high-level animate categorization

Abla Alaoui Soce; Bria Long; George A. Alvarez


Journal of Vision | 2017

Mid-level features are sufficient to drive the animacy and object size organization of the ventral stream

Bria Long; Talia Konkle


Journal of Vision | 2016

Pre-verbal infants automatically activate real-world object size information

Bria Long; Susan Carey; Talia Konkle

Collaboration


Dive into Bria Long's collaborations.

Top Co-Authors

Emmanuel Dupoux

École Normale Supérieure

Luca Filippin

École Normale Supérieure
