Publication


Featured research published by Ori Amir.


Cerebral Cortex | 2015

Ha Ha! Versus Aha! A Direct Comparison of Humor to Nonhumorous Insight for Determining the Neural Correlates of Mirth

Ori Amir; Irving Biederman; Zhuangjun Wang; Xiaokun Xu

While humor typically involves a surprising discovery, not all discoveries are perceived as humorous or lead to a feeling of mirth. Is there a difference in the neural signature of humorous versus nonhumorous discovery? Subjects viewed drawings that were uninterpretable until a caption was presented that provided either: 1) a nonhumorous interpretation (or insight) of an object from an unusual or partial view (UV) or 2) a humorous interpretation (HU) of the image achieved by linking remote and unexpected concepts. fMRI activation elicited by the UV captions was a subset of that elicited by the humorous HU captions, with only the latter showing activity in the temporal poles and temporo-occipital junction (linking remote concepts), and medial prefrontal cortex (unexpected reward). Mirth may be a consequence of the linking of remote ideas producing high, and unexpected, activation in association and classical reward areas. We suggest that this process is mediated by opioid activity as part of a system rewarding attention to novel information.


Vision Research | 2011

The neural basis for shape preferences.

Ori Amir; Irving Biederman; Kenneth J. Hayworth

Several dimensions of shape, such as curvature or taper, can be regarded as extending from a singular (S) or 0 value (e.g., a straight contour with 0 curvature or parallel contours with a 0 angle of convergence) to an infinity of non-singular (NS) values (e.g., curves and non-parallel contours). As orientation in depth is varied, an S value remains S, and a NS value will vary but will remain NS. Infant and adult human participants viewed pairs of geons where one member had an S and the other had a NS value on a given shape dimension, e.g., a cylinder vs. a cone. Both adults and infants looked first, and adults looked longer, at the NS geons. The NS geons also produced greater fMRI activation in shape selective cortex (LOC), a result consistent with the greater single unit activity in macaque IT produced by those geons (Kayaert et al., 2005). That NS stimuli elicit higher neural activity and attract eye movements may account for search asymmetries in that these stimuli pop out from their S distractors but not the reverse. A positive association between greater activation in higher-level areas of the ventral pathway and visual preference has been demonstrated previously for real world scenes (Yue, Vessel, & Biederman, 2007) and may reflect the workings of a motivational system that leads humans to seek novel but richly interpretable information.


Vision Research | 2012

Sensitivity to nonaccidental properties across various shape dimensions.

Ori Amir; Irving Biederman; Kenneth J. Hayworth

Nonaccidental properties (NAPs) are image properties that are invariant over orientation in depth and are distinguished from metric properties (MPs) that can change continuously with variations over depth orientation. To a large extent NAPs allow facile recognition of objects at novel viewpoints. Two match-to-sample experiments with 2D or 3D appearing geons assessed sensitivity to NAP vs. MP differences. A matching geon was always identical to the sample and the distractor differed from the matching geon in either a NAP or an MP on a single generalized cone dimension. For example, if the sample was a cylinder with a slightly curved axis, the NAP distractor would have a straight axis and the MP distractor would have an axis of greater curvature than the sample. Critically, the NAP and MP differences were scaled so that the MP differences were slightly greater according to pixel energy and Gabor wavelet measures of dissimilarity. Exp. 1 used a staircase procedure to determine the threshold presentation time required to achieve 75% accuracy. Exp. 2 used a constant, brief display presentation time with reaction times and error rates as dependent measures. Both experiments revealed markedly greater sensitivity to NAP over MP differences, and this was generally true for the individual dimensions. The NAP advantage was not reflected in the similarity computations of the C2 stage of HMAX, a widely cited model of later stage cortical ventral stream processing.


Vision Research | 2012

Predicting the psychophysical similarity of faces and non-face complex shapes by image-based measures

Xiaomin Yue; Irving Biederman; Michael Mangini; Christoph von der Malsburg; Ori Amir

Shape representation is accomplished by a series of cortical stages in which cells in the first stage (V1) have local receptive fields tuned to contrast at a particular scale and orientation, each well modeled as a Gabor filter. In succeeding stages, the representation becomes largely invariant to Gabor coding (Kobatake & Tanaka, 1994). Because of the non-Gabor tuning in these later stages, which must be engaged for a behavioral response (Tong, 2003; Tong et al., 1998), a V1-based measure of shape similarity based on Gabor filtering would not be expected to be highly correlated with human performance when discriminating complex shapes (faces and teeth-like blobs) that differ metrically on a two-choice, match-to-sample task. Here we show that human performance is highly correlated with Gabor-based image measures (Gabor simple and complex cells), with values often in the mid 0.90s, even without discounting the variability in the speed and accuracy of performance not associated with the similarity of the distractors. This high correlation is generally maintained through the stages of HMAX, a model that builds upon the Gabor metric and develops units for complex features and larger receptive fields. This is the first report of the psychophysical similarity of complex shapes being predictable from a biologically motivated, physical measure of similarity. As accurate as these measures were for accounting for metric variation, a simple demonstration showed that all were insensitive to viewpoint invariant (nonaccidental) differences in shape.
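The Gabor-based similarity measures this abstract describes can be sketched in a few lines of NumPy. The following is a minimal, hypothetical illustration, not the Gabor-jet implementation of Lades et al. (1993) or the paper's actual measure: the kernel size, wavelengths, and orientation count are arbitrary choices for demonstration. Two images are each passed through a small bank of Gabor filters (several scales and orientations, as in V1 simple/complex cell models), and their dissimilarity is the Euclidean distance between the resulting response-magnitude vectors.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a Gabor filter: a sinusoid windowed by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # coordinate along the grating
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def gabor_magnitudes(img, n_orient=8, wavelengths=(4, 8, 16)):
    """Response magnitudes of a bank of Gabor filters (scales x orientations)."""
    F = np.fft.fft2(img)
    responses = []
    for lam in wavelengths:
        for k in range(n_orient):
            kern = gabor_kernel(31, lam, k * np.pi / n_orient, lam / 2)
            K = np.fft.fft2(kern, s=img.shape)  # zero-padded to image size
            responses.append(np.abs(np.fft.ifft2(F * K)))  # convolve via FFT
    return np.stack(responses)

def gabor_dissimilarity(img_a, img_b):
    """Euclidean distance between the two images' Gabor response vectors."""
    va = gabor_magnitudes(img_a).ravel()
    vb = gabor_magnitudes(img_b).ravel()
    return np.linalg.norm(va - vb)
```

Identical images yield a distance of zero; metrically different images yield increasing distances, which is the kind of physical dissimilarity scale the study correlates with human discrimination performance.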


Brain and Language | 2015

Developmental phonagnosia: Neural correlates and a behavioral marker.

Xiaokun Xu; Irving Biederman; Bryan E. Shilowich; Sarah B. Herald; Ori Amir; Naomi E. Allen

A 20-year old female, AN, with no history of neurological events or detectable lesions, was markedly poorer than controls at identifying her most familiar celebrity voices. She was normal at face recognition and in discriminating which of two speakers uttered a particular sentence. She evidences normal fMRI sensitivity for human speech and non-speech sounds. AN, and two other phonagnosics, were unable to imagine the voices of highly familiar individuals. A region in the ventromedial prefrontal cortex (vmPFC) was differentially activated in controls when imagining familiar celebrity voices compared to imagining non-voice sounds. AN evidenced no differential activation in this area, which has been termed a person identity semantic system. Rather than a deficit in the representation of voice-individuating cues, AN may be unable to associate those cues to the identity of a familiar person. In this respect, the deficit in developmental phonagnosia may bear a striking parallel to developmental prosopagnosia.


Vision Research | 2014

Greater sensitivity to nonaccidental than metric shape properties in preschool children.

Ori Amir; Irving Biederman; Sarah B. Herald; Manan P. Shah; Toben H. Mintz

Nonaccidental properties (NAPs) are image properties that are invariant over orientation in depth and allow facile recognition of objects at varied orientations. NAPs are distinguished from metric properties (MPs) that generally vary continuously with changes in orientation in depth. While a number of studies have demonstrated greater sensitivity to NAPs in human adults, pigeons, and macaque IT cells, the few studies that investigated sensitivities in preschool children did not find significantly greater sensitivity to NAPs. However, these studies did not provide a principled measure of the physical image differences for the MP and NAP variations. We assessed sensitivity to NAP vs. MP differences in a nonmatch-to-sample task in which 14 preschool children were instructed to choose which of two shapes was different from a sample shape in a triangular display. Importantly, we scaled the shape differences so that MP and NAP differences were roughly equal (although the MP differences were slightly larger), using the Gabor-Jet model of V1 similarity (Lades et al., 1993). Mean reaction times (RTs) for every child were shorter when the target shape differed from the sample in a NAP than an MP. The results suggest that preschoolers, like adults, are more sensitive to NAPs, which could explain their ability to rapidly learn new objects, even without observing them from every possible orientation.


Frontiers in Human Neuroscience | 2016

The Neural Correlates of Humor Creativity

Ori Amir; Irving Biederman

Unlike passive humor appreciation, the neural correlates of real-time humor creation have been unexplored. As a case study for creativity, humor generation uniquely affords a reliable assessment of a creative product’s quality with a clear and relatively rapid beginning and end, rendering it amenable to neuroimaging that has the potential for reflecting individual differences in expertise. Professional and amateur “improv” comedians and controls viewed New Yorker cartoon drawings while being scanned. For each drawing, they were instructed to generate either a humorous or a mundane caption. Greater comedic experience was associated with decreased activation in the striatum and medial prefrontal cortex (mPFC), but increased activation in temporal association regions (TMP). Less experienced comedians manifested greater activation of mPFC, reflecting their deliberate search through TMP association space. Professionals, by contrast, tended to reap the fruits of their spontaneous associations with reduced reliance on top-down guided search.


Visual Cognition | 2014

Phonagnosia: A voice homologue to prosopagnosia

Sarah B. Herald; Xiaokun Xu; Irving Biederman; Ori Amir; Bryan E. Shilowich

Phonagnosia is the inability to individuate people on the basis of their voice and is thus a condition parallel to that of prosopagnosia (Van Lancker & Canter, 1982). We report a case of developmental phonagnosia, AN, the second known to scientists (Garrido et al., 2009), with an exploration of the parallels between her and developmental prosopagnosia. In either condition, the individual shows very poor recognition accuracy of familiar faces or voices, and cannot imagine a familiar face or voice. Whereas controls show greater activation of the ventral medial prefrontal cortex (vmPFC) when imagining voices (vs. nonvoice sounds), AN showed no such activation. This same region shows greater activation to well-known faces versus faces that have just been familiarized through repetition in the course of the experiment. Developmental phonagnosia, as well as developmental prosopagnosia, may thus be conditions in which a person identity node (PIN) (Bruce & Young, 1986) cannot be activated through voice or face input. At the time of testing, AN was a 20-year-old female student at the University of Southern California with high cognitive, conversational, social, and face recognition abilities, and no known neurological insults. When tested on a celebrity voice recognition task, however, she was markedly poorer than controls (a number of whom were perfect) at identifying her most familiar celebrity voices. On each trial of the celebrity voice recognition task, participants would view a display of 1 to 4 celebrity headshots with their names and listen to two 7 s voice clips, one of a celebrity and the other a noncelebrity. The clips were carefully chosen not to provide any individuating information in content. Participants then selected the clip they judged to be that of a celebrity and (for 2 or 4 choices)


Computer Animation and Virtual Worlds | 2017

Social influence of humor in virtual human counselor's self-disclosure

Sin-Hwa Kang; David M. Krum; Peter Khooshabeh; Thai Phan; Chien-Yen Chang; Ori Amir; Rebecca Lin

We explored the social influence of humor in a virtual human counselor's self-disclosure while also varying the ethnicity of the virtual counselor. In a 2 × 3 experiment (humor and ethnicity of the virtual human counselor), participants experienced counseling interview interactions via Skype on a smartphone. We measured user responses to and perceptions of the virtual human counselor. The results demonstrate that humor positively affects user responses to and perceptions of a virtual counselor. The results further suggest that matching styles of humor with a virtual counselor's ethnicity influences user responses and perceptions. The results offer insight into the effective design and development of realistic and believable virtual human counselors. Furthermore, they illuminate the potential use of humor to enhance self-disclosure in human-agent interactions.


Communication Methods and Measures | 2018

Extracting Latent Moral Information from Text Narratives: Relevance, Challenges, and Solutions

René Weber; J. Michael Mangus; Richard Huskey; Frederic R. Hopp; Ori Amir; Reid Swanson; Andrew S. Gordon; Peter Khooshabeh; Lindsay Hahn; Ron Tamborini

Moral Foundations Theory (MFT) and the Model of Intuitive Morality and Exemplars (MIME) contend that moral judgments are built on a universal set of basic moral intuitions. A large body of research has supported many of MFT’s and the MIME’s central hypotheses. Yet, an important prerequisite of this research—the ability to extract latent moral content represented in media stimuli with a reliable procedure—has not been systematically studied. In this article, we subject different extraction procedures to rigorous tests, underscore challenges by identifying a range of reliabilities, develop new reliability test and coding procedures employing computational methods, and provide solutions that maximize the reliability and validity of moral intuition extraction. In six content analytical studies, including a large crowd-based study, we demonstrate that: (1) traditional content analytical approaches lead to rather low reliabilities; (2) variation in coding reliabilities can be predicted by both text features and characteristics of the human coders; and (3) reliability is largely unaffected by the detail of coder training. We show that a coding task with simplified training and a coding technique that treats moral foundations as fast, spontaneous intuitions leads to acceptable inter-rater agreement, and potentially to more valid moral intuition extractions. While this study was motivated by issues related to MFT and MIME research, the methods and findings in this study have implications for extracting latent content from text narratives that go beyond moral information. Accordingly, we provide a tool for researchers interested in applying this new approach in their own work.
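The inter-rater agreement this abstract refers to is typically quantified with a chance-corrected statistic. As a minimal, hypothetical sketch (not the study's actual reliability procedure, and with illustrative moral-foundation labels), here is Cohen's kappa for two coders assigning nominal labels to the same items:

```python
import numpy as np

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' nominal labels."""
    a = np.asarray(coder_a)
    b = np.asarray(coder_b)
    labels = np.union1d(a, b)
    # Observed agreement: fraction of items both coders labeled identically.
    p_o = np.mean(a == b)
    # Expected chance agreement, from each coder's label marginals.
    p_e = sum(np.mean(a == lab) * np.mean(b == lab) for lab in labels)
    return (p_o - p_e) / (1 - p_e)

# Illustrative labels only; two coders tagging four text passages.
coder_1 = ["care", "fairness", "care", "loyalty"]
coder_2 = ["care", "fairness", "harm", "loyalty"]
print(cohens_kappa(coder_1, coder_2))  # 3/4 observed agreement -> kappa 2/3
```

Kappa of 1 means perfect agreement, 0 means agreement no better than chance; studies like this one often report such statistics (or generalizations such as Krippendorff's alpha for many coders and missing data) when comparing coding procedures.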

Collaboration


Dive into Ori Amir's collaborations.

Top Co-Authors

Irving Biederman (University of Southern California)
Xiaokun Xu (University of Southern California)
Sarah B. Herald (University of Southern California)
Bryan E. Shilowich (University of Southern California)
Jiye G. Kim (University of Southern California)
Andrew S. Gordon (University of Southern California)
Manan P. Shah (University of Southern California)
Reid Swanson (University of Southern California)
René Weber (University of California)