
Publication


Featured research published by Mario Aguilar.


Journal of Vision | 2010

Pupil dilation during visual target detection

Claudio M. Privitera; Laura Walker Renninger; Thom Carney; Stanley A. Klein; Mario Aguilar

It has long been documented that emotional and sensory events elicit a pupillary dilation. Is the pupil response a reliable marker of a visual detection event while viewing complex imagery? In two experiments where viewers were asked to report the presence of a visual target during rapid serial visual presentation (RSVP), pupil dilation was significantly associated with target detection. The amplitude of the dilation depended on the frequency of targets and the time of target presentation relative to the start of the trial. Larger dilations were associated with trials having fewer targets and with targets viewed earlier in the run. We found that dilation was influenced by, but not dependent on, the requirement of a button press. Interestingly, we also found that dilation occurred when viewers fixated a target but did not report seeing it. We will briefly discuss the role of noradrenaline in mediating these pupil behaviors.


Cognitive Neuropsychology | 2016

Toward a brain-based componential semantic representation

Jeffrey R. Binder; Lisa L. Conant; Colin Humphries; Leonardo Fernandino; Stephen B. Simons; Mario Aguilar; Rutvik H. Desai

Componential theories of lexical semantics assume that concepts can be represented by sets of features or attributes that are in some sense primitive or basic components of meaning. The binary features used in classical category and prototype theories are problematic in that these features are themselves complex concepts, leaving open the question of what constitutes a primitive feature. The present availability of brain imaging tools has enhanced interest in how concepts are represented in brains, and accumulating evidence supports the claim that these representations are at least partly “embodied” in the perception, action, and other modal neural systems through which concepts are experienced. In this study we explore the possibility of devising a componential model of semantic representation based entirely on such functional divisions in the human brain. We propose a basic set of approximately 65 experiential attributes based on neurobiological considerations, comprising sensory, motor, spatial, temporal, affective, social, and cognitive experiences. We provide normative data on the salience of each attribute for a large set of English nouns, verbs, and adjectives, and show how these attribute vectors distinguish a priori conceptual categories and capture semantic similarity. Robust quantitative differences between concrete object categories were observed across a large number of attribute dimensions. A within- versus between-category similarity metric showed much greater separation between categories than representations derived from distributional (latent semantic) analysis of text. Cluster analyses were used to explore the similarity structure in the data independent of a priori labels, revealing several novel category distinctions. We discuss how such a representation might deal with various longstanding problems in semantic theory, such as feature selection and weighting, representation of abstract concepts, effects of context on semantic retrieval, and conceptual combination. In contrast to componential models based on verbal features, the proposed representation systematically relates semantic content to large-scale brain networks and biologically plausible accounts of concept acquisition.
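The attribute-vector representation and the within- versus between-category comparison described above can be sketched numerically. The words, categories, attribute names, and salience values below are hypothetical stand-ins for the study's norms, which span roughly 65 attributes:

```python
import numpy as np

# Hypothetical salience ratings on five of the ~65 experiential
# attributes; words, categories, and numbers are illustrative only.
ratings = {
    # word:     Vision  Motion  Sound  Emotion  Social
    "hammer":   [5.0,   4.0,    3.0,   0.5,     0.5],
    "knife":    [5.0,   3.5,    1.0,   2.0,     0.5],
    "dog":      [5.5,   5.0,    4.5,   4.0,     4.0],
    "horse":    [5.5,   5.5,    3.0,   2.5,     2.0],
}
category = {"hammer": "tool", "knife": "tool",
            "dog": "animal", "horse": "animal"}

def cosine(u, v):
    u, v = np.asarray(u), np.asarray(v)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Within- minus between-category mean similarity: a positive value means
# the attribute vectors separate the a priori categories.
words = list(ratings)
within, between = [], []
for i, w1 in enumerate(words):
    for w2 in words[i + 1:]:
        sim = cosine(ratings[w1], ratings[w2])
        (within if category[w1] == category[w2] else between).append(sim)
separation = float(np.mean(within) - np.mean(between))
print(f"within={np.mean(within):.3f}  between={np.mean(between):.3f}  "
      f"separation={separation:.3f}")
```

On these toy numbers the within-category similarity exceeds the between-category similarity, which is the direction of the effect the abstract reports, though of course not its magnitude.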


IEEE Transactions on Biomedical Engineering | 2009

Decision-Level Fusion of EEG and Pupil Features for Single-Trial Visual Detection Analysis

Ming Qian; Mario Aguilar; Karen N. Zachery; Claudio M. Privitera; Stanley A. Klein; Thom Carney; Loren W. Nolte

Several recent studies have reported success in applying EEG-based signal analysis to achieve accurate single-trial classification of responses to visual target detection. Pupil responses are proposed as a complementary modality that can support improved accuracy of single-trial signal analysis. We develop a pupillary response feature-extraction and -selection procedure that helps to improve the classification performance of a system based only on EEG signal analysis. We apply a two-level linear classifier to obtain cognitive-task-related analysis of EEG and pupil responses. The classification results based on the two modalities are then fused at the decision level, with the goal of increasing classification confidence by exploiting the inherent complementarity of the modalities. The fusion results show significant improvement over classification performance based on a single modality.
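The decision-level fusion step can be illustrated with a small sketch. The per-trial posteriors below are made-up numbers, and log-odds averaging is one common fusion rule, not necessarily the paper's exact combiner:

```python
import numpy as np

# Hypothetical per-trial target posteriors from two independently
# trained classifiers; the study's actual classifiers are two-level
# linear ones, and these numbers are made up for illustration.
p_eeg   = np.array([0.92, 0.40, 0.45, 0.10, 0.65])
p_pupil = np.array([0.55, 0.55, 0.85, 0.30, 0.40])
labels  = np.array([1, 0, 1, 0, 1])    # ground truth: 1 = target present

def fuse_log_odds(p1, p2, eps=1e-9):
    """Decision-level fusion: average the two modalities' log-odds,
    then map the result back to a probability."""
    lo = 0.5 * (np.log((p1 + eps) / (1 - p1 + eps)) +
                np.log((p2 + eps) / (1 - p2 + eps)))
    return 1.0 / (1.0 + np.exp(-lo))

p_fused = fuse_log_odds(p_eeg, p_pupil)
for name, p in [("EEG", p_eeg), ("pupil", p_pupil), ("fused", p_fused)]:
    print(f"{name:5s} accuracy = {np.mean((p > 0.5) == labels):.2f}")
```

In this toy case each single modality misclassifies a different trial, so the fused decision recovers both, which is the complementarity the abstract describes.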


Cerebral Cortex | 2016

Predicting Neural Activity Patterns Associated with Sentences Using a Neurobiologically Motivated Model of Semantic Representation

Andrew J. Anderson; Jeffrey R. Binder; Leonardo Fernandino; Colin Humphries; Lisa L. Conant; Mario Aguilar; Xixi Wang; Donias Doko; Rajeev D. S. Raizada

We introduce an approach that predicts neural representations of word meanings contained in sentences and then superposes these to predict neural representations of new sentences. A neurobiological semantic model based on sensory, motor, social, emotional, and cognitive attributes was used as a foundation to define semantic content. Previous studies have predominantly predicted neural patterns for isolated words, using models that lack neurobiological interpretation. Fourteen participants read 240 sentences describing everyday situations while undergoing fMRI. To connect sentence-level fMRI activation patterns to the word-level semantic model, we devised methods to decompose the fMRI data into individual words. Activation patterns associated with each attribute in the model were then estimated using multiple regression. This enabled synthesis of activation patterns for trained and new words, which were subsequently averaged to predict new sentences. Region-of-interest analyses revealed that prediction accuracy was highest using voxels in the left temporal and inferior parietal cortex, although a broad range of regions returned statistically significant results, showing that semantic information is widely distributed across the brain. The results show how a neurobiologically motivated semantic model can decompose sentence-level fMRI data into activation features for component words, which can be recombined to predict activation patterns for new sentences.
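The regression-and-superposition pipeline can be sketched on synthetic data. The dimensions, word vectors, and generating model below are all assumptions for illustration, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 5 experiential attributes, 40 voxels, 8
# training words (the study used ~65 attributes and whole-brain fMRI).
n_attr, n_vox, n_train = 5, 40, 8

S = rng.random((n_train, n_attr))               # attribute vector per word
W_true = rng.standard_normal((n_attr, n_vox))   # generating model
Y = S @ W_true + 0.01 * rng.standard_normal((n_train, n_vox))  # "fMRI"

# Multiple regression: estimate one activation map per attribute.
W_hat, *_ = np.linalg.lstsq(S, Y, rcond=None)

# Synthesize a pattern for an unseen word from its attributes alone...
new_word = rng.random(n_attr)
pred_word = new_word @ W_hat

# ...then predict a new sentence by averaging its words' patterns.
sentence = [S[0], S[3], new_word]
pred_sentence = np.mean([w @ W_hat for w in sentence], axis=0)

# The synthesized pattern should track the generating model closely.
r = np.corrcoef(pred_word, new_word @ W_true)[0, 1]
print(f"correlation with true pattern: {r:.3f}")
```

Because each attribute contributes an additive activation map, word patterns superpose linearly, which is what makes averaging them into a sentence prediction coherent.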


Human Vision and Electronic Imaging Conference | 2008

The pupil dilation response to visual detection

Claudio M. Privitera; Laura Walker Renninger; Thom Carney; Stanley A. Klein; Mario Aguilar

The pupil dilation reflex is mediated by inhibition of the parasympathetic Edinger-Westphal oculomotor complex and sympathetic activity. It has long been documented that emotional and sensory events elicit a pupillary reflex dilation. Is the pupil response a reliable marker of a visual detection event? In two experiments where viewers were asked to report the presence of a visual target during rapid serial visual presentation (RSVP), pupil dilation was significantly associated with target detection. The amplitude of the dilation depended on the frequency of targets and the time of the detection. Larger dilations were associated with trials having fewer targets and with targets viewed earlier during the trial. We also found that dilation was strongly influenced by the visual task.


Vision Research | 2014

Analysis of microsaccades and pupil dilation reveals a common decisional origin during visual search.

Claudio M. Privitera; Thom Carney; Stanley A. Klein; Mario Aguilar

During free-viewing visual search, observers often refixate the same locations several times before and after target detection is reported with a button press. We analyzed the rate of microsaccades in the sequence of refixations made during visual search and found two important components. One related to the visual content of the region being fixated; fixations on targets generate more microsaccades, and more microsaccades are generated for those targets that are more difficult to disambiguate. The other emphasizes non-visual decisional processes; fixations containing the button press generate more microsaccades than those made on the same target but without the button press. Pupil dilation during the same refixations reveals a similar modulation. We inferred that generic sympathetic arousal mechanisms are part of the articulated complex of perceptual processes governing fixational eye movements.


Proceedings of SPIE, the International Society for Optical Engineering | 2008

Liquid Crystal Tunable Polarization Filter for Target Detection Applications

Bruce K. Winker; Dong-Feng Gu; Bing Wen; Karen N. Zachery; John E. Mansell; Donald B. Taber; Keith Sage; William J. Gunning; Mario Aguilar

Natural materials rarely produce strong polarization signatures, but man-made objects, typically having more planar or smoother surfaces, tend to produce relatively strong polarization signatures. These signatures, when used in combination with other means, can significantly aid in the detection of man-made objects. To explore the utility of polarization signatures for target detection applications, we have developed a new type of polarimetric imaging sensor based on tunable liquid crystal components. Current state-of-the-art polarimetric sensors employ numerous types of imaging polarimeters, the most common of which are aperture-division, micropolarizer, and rotating polarizer/analyzer designs. Our design uses an electronically tunable device that rotates the polarization of incoming light, followed by a single fixed-orientation linear polarizer. Its unique features include: 1) sub-millisecond switching speed, 2) ~75% transmission throughput, 3) no loss of sensor resolution, 4) zero mechanical moving parts, 5) broadband operation (~75% of center wavelength), 6) ~100:1 contrast ratio, 7) wide acceptance angle (±10°), and 8) a compact, monolithic architecture (~10 in³). This paper summarizes our tunable liquid crystal polarimetric imaging sensor architecture, the benefits of our design, analysis of laboratory and field data, and the applicability of polarization signatures in target detection applications.
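The measurement principle (a polarization rotator in front of a fixed analyzer) can be sketched with the standard linear-Stokes recovery from four rotator settings. The scene values below are illustrative only, not from the paper:

```python
import numpy as np

# A polarization rotator followed by a fixed linear analyzer is
# equivalent to rotating the analyzer itself, so the measured intensity
# follows the standard relation
#   I(theta) = 0.5 * (S0 + S1*cos(2*theta) + S2*sin(2*theta)).
# Four rotator settings recover the linear Stokes parameters per pixel.

def linear_stokes(i0, i45, i90, i135):
    """Stokes S0, S1, S2 and degree of linear polarization (DoLP)
    from intensities at analyzer angles 0/45/90/135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity (averaged)
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / s0
    return s0, s1, s2, dolp

def intensity(theta_deg, s0, s1, s2):
    t = np.deg2rad(theta_deg)
    return 0.5 * (s0 + s1 * np.cos(2 * t) + s2 * np.sin(2 * t))

# Synthetic "pixel": a smooth man-made surface with a strong linear
# polarization signature (values are made up for illustration).
target = (1.0, 0.6, 0.2)                 # S0, S1, S2
meas = [intensity(a, *target) for a in (0, 45, 90, 135)]
s0, s1, s2, dolp = linear_stokes(*meas)
print(f"recovered S1={s1:.2f} S2={s2:.2f} DoLP={dolp:.3f}")
```

A DoLP image computed this way is what separates smooth man-made surfaces from weakly polarizing natural backgrounds.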


Frontiers in Human Neuroscience | 2018

Closed-Loop Targeted Memory Reactivation during Sleep Improves Spatial Navigation

Renee E. Shimizu; Patrick M. Connolly; Nicola Cellini; Diana M. Armstrong; Lexus T. Hernandez; Rolando Estrada; Mario Aguilar; Michael P. Weisend; Sara C. Mednick; Stephen B. Simons

Sounds associated with newly learned information that are replayed during non-rapid eye movement (NREM) sleep can improve recall in simple tasks. The mechanism for this improvement is presumed to be reactivation of the newly learned memory during sleep, when consolidation takes place. We have developed an EEG-based closed-loop system to precisely deliver sensory stimulation at the time of down-state to up-state transitions during NREM sleep. Here, we demonstrate that applying this technology to participants performing a realistic navigation task in virtual reality results in a significant improvement in navigation efficiency after sleep, accompanied by increases in spectral power, especially in the fast (12–15 Hz) sleep spindle band. Our results show promise for the application of sleep-based interventions to drive improvement in real-world tasks.
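The closed-loop trigger logic can be sketched as follows. The threshold, sampling rate, and synthetic slow wave are assumptions for illustration, not the paper's parameters or pipeline:

```python
import numpy as np

# Minimal sketch of the closed-loop idea: monitor slow-wave EEG, detect
# the down-state trough, and deliver the sound cue on the rising
# transition into the up-state.
fs = 100                                   # sampling rate (Hz), assumed
t = np.arange(0, 5, 1 / fs)
eeg = -80 * np.sin(2 * np.pi * 0.8 * t)    # synthetic 0.8 Hz slow wave (uV)

DOWN_THRESH = -60.0                        # down-state threshold (uV), assumed

cue_times = []
armed = False
for i in range(1, len(eeg)):
    if eeg[i] < DOWN_THRESH:
        armed = True                       # inside a down-state
    elif armed and eeg[i - 1] < 0 <= eeg[i]:
        cue_times.append(t[i])             # rising zero-cross: up-state
        armed = False                      # one cue per slow wave

print(f"{len(cue_times)} cues delivered at {np.round(cue_times, 2)} s")
```

Arming on the trough and firing on the rising zero-crossing is what makes the stimulation land on the down-to-up transition rather than at an arbitrary phase.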


IEEE Pulse | 2012

Real-Time Unconstrained Object Recognition: A Processing Pipeline Based on the Mammalian Visual System

Mario Aguilar; Mark Peot; Jiangying Zhou; Stephen B. Simons; Yuwei Liao; Nader Metwalli; Mark B. Anderson

The mammalian visual system is still the gold standard for recognition accuracy, flexibility, efficiency, and speed. Ongoing advances in our understanding of function and mechanisms in the visual system can now be leveraged to pursue the design of computer vision architectures that will revolutionize the state of the art in computer vision.


Archive | 2009

Fixation-Locked Measurement of Brain Responses to Stimuli

Mario Aguilar; Aaron T. Hawkins; Patrick M. Connolly; Ming Qian

Collaboration


Dive into Mario Aguilar's collaborations.

Top Co-Authors
Thom Carney

University of California
