
Publication


Featured research published by Jordan Hashemi.


PLOS Biology | 2014

A tetraploid intermediate precedes aneuploid formation in yeasts exposed to fluconazole.

Benjamin D. Harrison; Jordan Hashemi; Maayan Bibi; Rebecca Pulver; Danny Bavli; Yaakov Nahmias; Melanie Wellington; Guillermo Sapiro; Judith Berman

When exposed to the antifungal drug fluconazole, Candida albicans undergoes abnormal growth, forming three-lobed “trimeras.” These trimeras produce genetically variable, often aneuploid progeny with varying numbers of chromosomes, increasing the odds that a drug-resistant strain will emerge.


international conference on development and learning | 2012

A computer vision approach for the assessment of autism-related behavioral markers

Jordan Hashemi; Thiago Vallin Spina; Mariano Tepper; Amy Esler; Vassilios Morellas; Nikolaos Papanikolopoulos; Guillermo Sapiro

The early detection of developmental disorders is key to child outcome, allowing interventions to be initiated that promote development and improve prognosis. Research on autism spectrum disorder (ASD) suggests behavioral markers can be observed late in the first year of life. Many of these studies involved extensive frame-by-frame video observation and analysis of a child's natural behavior. Although non-intrusive, these methods are extremely time-intensive and require a high level of observer training; thus, they are impractical for clinical purposes. Diagnostic measures for ASD are available for infants but are only accurate when used by specialists experienced in early diagnosis. This work is a first milestone in a long-term multidisciplinary project that aims at helping clinicians and general practitioners accomplish this early detection/measurement task automatically. We focus on providing computer vision tools to measure and identify ASD behavioral markers based on components of the Autism Observation Scale for Infants (AOSI). In particular, we develop algorithms to measure three critical AOSI activities that assess visual attention. We augment these AOSI activities with an additional test that analyzes asymmetrical patterns in unsupported gait. The first set of algorithms assesses head motion by facial feature tracking, while the gait analysis relies on joint foreground segmentation and 2D body pose estimation in video. We show results that provide insightful knowledge to augment the clinician's behavioral observations obtained from real in-clinic assessments.
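
The abstract only names the components; as a rough illustration of the head-motion piece, the sketch below estimates head yaw/pitch/roll from six tracked 2D facial landmarks using a generic 3D face model and OpenCV's solvePnP. The 3D model coordinates and the focal-length guess are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of the head-motion idea: estimate head pose from
# tracked 2D facial landmarks with a generic 3D face model (not the paper's).
import cv2
import numpy as np

# Generic 3D reference points (nose tip, chin, eye corners, mouth corners), in mm.
MODEL_3D = np.array([
    (0.0, 0.0, 0.0),        # nose tip
    (0.0, -63.6, -12.5),    # chin
    (-43.3, 32.7, -26.0),   # left eye outer corner
    (43.3, 32.7, -26.0),    # right eye outer corner
    (-28.9, -28.9, -24.1),  # left mouth corner
    (28.9, -28.9, -24.1),   # right mouth corner
], dtype=np.float64)

def head_angles(landmarks_2d, frame_w, frame_h):
    """Return (pitch, yaw, roll) in degrees from six tracked 2D landmarks."""
    focal = frame_w  # crude focal-length guess; adequate for relative motion
    camera = np.array([[focal, 0, frame_w / 2],
                       [0, focal, frame_h / 2],
                       [0, 0, 1]], dtype=np.float64)
    ok, rvec, _ = cv2.solvePnP(
        MODEL_3D, np.asarray(landmarks_2d, dtype=np.float64), camera, None)
    if not ok:
        return None
    rot, _ = cv2.Rodrigues(rvec)          # rotation vector -> rotation matrix
    return cv2.RQDecomp3x3(rot)[0]        # Euler angles in degrees
```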


Autism Research and Treatment | 2014

Computer Vision Tools for Low-Cost and Noninvasive Measurement of Autism-Related Behaviors in Infants

Jordan Hashemi; Mariano Tepper; Thiago Vallin Spina; Amy Esler; Vassilios Morellas; Nikolaos Papanikolopoulos; Helen L. Egger; Geraldine Dawson; Guillermo Sapiro

The early detection of developmental disorders is key to child outcome, allowing interventions to be initiated which promote development and improve prognosis. Research on autism spectrum disorder (ASD) suggests that behavioral signs can be observed late in the first year of life. Many of these studies involve extensive frame-by-frame video observation and analysis of a child's natural behavior. Although nonintrusive, these methods are extremely time-intensive and require a high level of observer training; thus, they are burdensome for clinical and large population research purposes. This work is a first milestone in a long-term project on non-invasive early observation of children in order to aid in risk detection and research of neurodevelopmental disorders. We focus on providing low-cost computer vision tools to measure and identify ASD behavioral signs based on components of the Autism Observation Scale for Infants (AOSI). In particular, we develop algorithms to measure responses to general ASD risk assessment tasks and activities outlined by the AOSI which assess visual attention by tracking facial features. We show results, including comparisons with expert and nonexpert clinicians, which demonstrate that the proposed computer vision tools can capture critical behavioral observations and potentially augment the clinician's behavioral observations obtained from real in-clinic assessments.
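
As an illustration of how tracked facial features might be reduced to AOSI-style attention measures, here is a minimal sketch (assumed, not the authors' code) that bins a per-frame head-yaw signal into left/center/right gaze and counts shifts across midline; the 15-degree threshold is an illustrative parameter.

```python
# Illustrative only: summarize a per-frame head-yaw signal into the kinds
# of attention quantities the AOSI tasks ask about.
import numpy as np

def attention_summary(yaw_deg, fps, threshold=15.0):
    """yaw_deg: per-frame head yaw in degrees (negative = left).
    Returns fraction of time looking left/center/right and shift count."""
    yaw = np.asarray(yaw_deg, dtype=float)
    bins = np.where(yaw < -threshold, -1, np.where(yaw > threshold, 1, 0))
    frac = {side: float(np.mean(bins == code))
            for side, code in (("left", -1), ("center", 0), ("right", 1))}
    shifts = int(np.count_nonzero(np.diff(bins)))  # changes of gaze bin
    return {"fractions": frac, "shifts": shifts, "duration_s": len(yaw) / fps}
```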


The Journal of Pediatrics | 2017

Use of a Digital Modified Checklist for Autism in Toddlers – Revised with Follow-up to Improve Quality of Screening for Autism

Kathleen Campbell; Kimberly L. H. Carpenter; Steven Espinosa; Jordan Hashemi; Qiang Qiu; Mariano Tepper; Robert Calderbank; Guillermo Sapiro; Helen L. Egger; Jeffrey P. Baker; Geraldine Dawson

Objectives: To assess changes in quality of care for children at risk for autism spectrum disorders (ASD) due to process improvement and implementation of a digital screening form.

Study design: The process of screening for ASD was studied in an academic primary care pediatrics clinic before and after implementation of a digital version of the Modified Checklist for Autism in Toddlers – Revised with Follow-up with automated risk assessment. Quality metrics included accuracy of documentation of screening results and appropriate action for positive screens (secondary screening or referral). Participating physicians completed pre- and postintervention surveys to measure changes in attitudes toward feasibility and value of screening for ASD. Evidence of change was evaluated with statistical process control charts and χ2 tests.

Results: Accurate documentation in the electronic health record of screening results increased from 54% to 92% (38% increase, 95% CI 14%-64%), and appropriate action for children screening positive increased from 25% to 85% (60% increase, 95% CI 35%-85%). A total of 90% of participating physicians agreed that the transition to a digital screening form improved their clinical assessment of autism risk.

Conclusions: Implementation of a tablet-based digital version of the Modified Checklist for Autism in Toddlers – Revised with Follow-up led to improved quality of care for children at risk for ASD and increased acceptability of screening for ASD. Continued efforts towards improving the process of screening for ASD could facilitate rapid, early diagnosis of ASD and advance the accuracy of studies of the impact of screening.
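
The abstract reports only percentages, so the sketch below illustrates the kind of χ2 comparison described, using hypothetical denominators (100 charts per period); it is not the study's analysis code.

```python
# Sketch of a chi-squared test on pre- vs post-intervention documentation
# rates. Counts assume 100 charts per period purely for illustration.
from scipy.stats import chi2_contingency
import numpy as np

# Rows: pre vs post; columns: accurately documented vs not.
table = np.array([[54, 46],   # pre:  54% accurately documented
                  [92, 8]])   # post: 92% accurately documented
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4g}, dof={dof}")
```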


Autism | 2018

Computer vision analysis captures atypical attention in toddlers with autism

Kathleen Campbell; Kimberly L. H. Carpenter; Jordan Hashemi; Steven Espinosa; Samuel Marsan; Jana Schaich Borg; Zhuoqing Chang; Qiang Qiu; Saritha Vermeer; Elizabeth Adler; Mariano Tepper; Helen L. Egger; Jeffrey P. Baker; Guillermo Sapiro; Geraldine Dawson

To demonstrate the capability of computer vision analysis to detect atypical orienting and attention behaviors in toddlers with autism spectrum disorder. One hundred four toddlers aged 16–31 months (mean = 22 months) participated in this study. Twenty-two of the toddlers had autism spectrum disorder and 82 had typical development or developmental delay. Toddlers watched video stimuli on a tablet while the built-in camera recorded their head movement. Computer vision analysis measured participants’ attention and orienting in response to name calls. Reliability of the computer vision analysis algorithm was tested against a human rater. Differences in behavior were analyzed between the autism spectrum disorder group and the comparison group. Reliability between computer vision analysis and human coding for orienting to name was excellent (intraclass correlation coefficient 0.84, 95% confidence interval 0.67–0.91). Only 8% of toddlers with autism spectrum disorder oriented to name calling on >1 trial, compared to 63% of toddlers in the comparison group (p = 0.002). Mean latency to orient was significantly longer for toddlers with autism spectrum disorder (2.02 vs 1.06 s, p = 0.04). Sensitivity for autism spectrum disorder of atypical orienting was 96% and specificity was 38%. Older toddlers with autism spectrum disorder showed less attention to the videos overall (p = 0.03). Automated coding offers a reliable, quantitative method for detecting atypical social orienting and reduced sustained attention in toddlers with autism spectrum disorder.
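
A schematic version of the latency-to-orient measure, assuming a per-frame head-yaw track and a known name-call onset frame; the 30-degree turn threshold and 3-second response window are illustrative guesses, not the published parameters.

```python
# Assumed sketch, not the published implementation: latency from name-call
# onset until the head turns past a threshold relative to its pose at onset.
import numpy as np

def latency_to_orient(yaw_deg, fps, call_frame, turn_deg=30.0, window_s=3.0):
    """Return latency in seconds, or None if the child never orients."""
    yaw = np.asarray(yaw_deg, dtype=float)
    baseline = yaw[call_frame]                      # pose at name-call onset
    end = min(len(yaw), call_frame + int(window_s * fps))
    for f in range(call_frame, end):
        if abs(yaw[f] - baseline) > turn_deg:       # head turned toward caller
            return (f - call_frame) / fps
    return None                                     # no orienting response
```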


international conference on image processing | 2015

Cross-modality pose-invariant facial expression

Jordan Hashemi; Qiang Qiu; Guillermo Sapiro

In this work, we present a dictionary learning based framework for robust, cross-modality, and pose-invariant facial expression recognition. The proposed framework first learns a dictionary that i) contains 3D shape and morphological information as well as 2D texture and geometric information, ii) enforces coherence across the 2D and 3D modalities and across different poses, and iii) is robust in the sense that a learned dictionary can be applied across multiple facial expression datasets. We demonstrate that, by enforcing domain-specific block structures on the dictionary, a test expression sample can be transformed across different domains for tasks such as pose alignment. We validate our approach on the task of pose-invariant facial expression recognition on the standard BU3D-FE and MultiPie datasets, achieving state-of-the-art performance.
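
To make the block structure concrete, here is a toy sketch (assumed, not the authors' implementation) in which a joint dictionary is learned over stacked 2D and 3D feature vectors, so each atom splits into modality-specific blocks; coding a test sample against one block then reconstructs it in the other modality. The feature dimensions and data are synthetic placeholders.

```python
# Toy coupled-dictionary sketch: atoms learned jointly over stacked 2D and
# 3D features, then used for cross-modality reconstruction.
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
n, d2, d3 = 200, 64, 48                 # samples, 2D-feature dim, 3D-feature dim
X2, X3 = rng.normal(size=(n, d2)), rng.normal(size=(n, d3))  # stand-in features
X = np.hstack([X2, X3])                 # each row couples both modalities

dl = DictionaryLearning(n_components=32, alpha=1.0, max_iter=20, random_state=0)
dl.fit(X)
D = dl.components_                      # shape (32, d2 + d3)
D2, D3 = D[:, :d2], D[:, d2:]           # modality-specific blocks of each atom

# Code a test sample from its 2D block only, then predict its 3D features.
x2 = rng.normal(size=(1, d2))
codes = sparse_encode(x2, D2, algorithm="omp", n_nonzero_coefs=5)
x3_hat = codes @ D3                     # cross-modality reconstruction
```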


npj Digital Medicine | 2018

Automatic emotion and attention analysis of young children at home: a ResearchKit autism feasibility study

Helen L. Egger; Geraldine Dawson; Jordan Hashemi; Kimberly L. H. Carpenter; Steven Espinosa; Kathleen Campbell; Samuel Brotkin; Jana Schaich-Borg; Qiang Qiu; Mariano Tepper; Jeffrey P. Baker; Richard A. Bloomfield; Guillermo Sapiro

Current tools for objectively measuring young children’s observed behaviors are expensive, time-consuming, and require extensive training and professional administration. The lack of scalable, reliable, and validated tools impacts access to evidence-based knowledge and limits our capacity to collect population-level data in non-clinical settings. To address this gap, we developed mobile technology to collect videos of young children while they watched movies designed to elicit autism-related behaviors and then used automatic behavioral coding of these videos to quantify children’s emotions and behaviors. We present results from our iPhone study Autism & Beyond, built on ResearchKit’s open-source platform. The entire study, from an e-Consent process to stimuli presentation and data collection, was conducted within an iPhone-based app available in the Apple Store. Over 1 year, 1756 families with children aged 12–72 months old participated in the study, completing 5618 caregiver-reported surveys and uploading 4441 videos recorded in the child’s natural settings. Usable data were collected on 87.6% of the uploaded videos. Automatic coding identified significant differences in emotion and attention by age, sex, and autism risk status. This study demonstrates the acceptability of an app-based tool to caregivers, their willingness to upload videos of their children, the feasibility of caregiver-collected data in the home, and the application of automatic behavioral encoding to quantify emotions and attention variables that are clinically meaningful and may be refined to screen children for autism and developmental disorders outside of clinical settings. This technology has the potential to transform how we screen and monitor children’s development.

Mobile technologies: revolutionizing behavioral assessments of young children

A phone-based app that assesses the behavior of young children in their homes can be used to determine their risk of autism. Autism is the most common neurodevelopmental disorder in the US. Although some of the signs of autism can be identified within the first months of life, many children wait years to be diagnosed. A study led by Richard Bloomfield, Geraldine Dawson, Helen Egger, and Guillermo Sapiro, Duke University, examined caregiver-collected smartphone videos that quantify the emotion and attention of children aged between 1 and 6 years watching movies known to elicit autism-related behaviors. Automatic coding identified significant differences in emotion and attention by age, sex, and autism risk status. These findings highlight the feasibility of using this approach to identify autism symptoms and, potentially, those of other behavioral disorders.
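
As a hedged sketch of how frame-level automatic coding might be rolled up into the per-video emotion and attention variables the study analyzes, the function below assumes per-frame emotion probabilities and a face-detection mask as inputs; none of this is the study's actual pipeline.

```python
# Sketch only: aggregate hypothetical per-frame coding into per-video
# summaries. Frame probabilities and the attention proxy are assumptions.
import numpy as np

def summarize_video(frame_probs, face_found):
    """frame_probs: (n_frames, 3) probs for [positive, neutral, negative];
    face_found: boolean mask of frames where a face was detected."""
    mask = np.asarray(face_found, dtype=bool)
    probs = np.asarray(frame_probs)[mask]
    if len(probs) == 0:
        return None                       # unusable video
    dominant = probs.argmax(axis=1)       # most likely emotion per frame
    return {
        "pct_positive": float(np.mean(dominant == 0)),
        "pct_neutral": float(np.mean(dominant == 1)),
        "pct_negative": float(np.mean(dominant == 2)),
        "attention": float(np.mean(mask)),  # proxy: face-on-screen time
    }
```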


PLOS ONE | 2018

Motivational valence alters memory formation without altering exploration of a real-life spatial environment

Kimberly S. Chiew; Jordan Hashemi; Lee K. Gans; Laura Lerebours; Nathaniel J. Clement; Mai-Anh T. Vu; Guillermo Sapiro; Nicole E. Heller; R. Alison Adcock

Volitional exploration and learning are key to adaptive behavior, yet their characterization remains a complex problem for cognitive science. Exploration has been posited as a mechanism by which motivation promotes memory, but this relationship is not well-understood, in part because novel stimuli that motivate exploration also reliably elicit changes in neuromodulatory brain systems that directly alter memory formation, via effects on neural plasticity. To deconfound interrelationships between motivation, exploration, and memory formation we manipulated motivational state prior to entering a spatial context, measured exploratory responses to the context and novel stimuli within it, and then examined motivation and exploration as predictors of memory outcomes. To elicit spontaneous exploration, we used the physical space of an art exhibit with affectively rich content; we expected motivated exploration and memory to reflect multiple factors, including not only motivational valence, but also individual differences. Motivation was manipulated via an introductory statement framing exhibit themes in terms of Promotion- or Prevention-oriented goals. Participants explored the exhibit while being tracked by video. They returned 24 hours later for recall and spatial memory tests, followed by measures of motivation, personality, and relevant attitude variables. Promotion and Prevention condition participants did not differ in terms of group-level exploration time or memory metrics, suggesting similar motivation to explore under both framing contexts. However, exploratory behavior and memory outcomes were significantly more closely related under Promotion than Prevention, indicating that Prevention framing disrupted expected depth-of-encoding effects. Additionally, while trait measures predicted exploration similarly across framing conditions, traits interacted with motivational framing context and facial affect to predict memory outcomes. This novel characterization of motivated learning implies that dissociable behavioral and biological mechanisms, here varying as a function of valence, contribute to memory outcomes in complex, real-life environments.
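
The moderation finding (exploration predicting memory more strongly under Promotion than Prevention framing) is the kind of question an interaction term tests; below is a hedged statsmodels sketch with assumed file and column names, not the authors' analysis script.

```python
# Hypothetical analysis sketch: does framing condition moderate the
# exploration-memory relationship? All names below are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("exhibit_study.csv")  # assumed file with these columns:
# memory: recall score; explore_s: exploration time (s); framing: Promotion/Prevention
model = smf.ols("memory ~ explore_s * C(framing)", data=df).fit()
print(model.summary())  # the explore_s:C(framing) term tests the moderation
```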


arXiv: Computer Vision and Pattern Recognition | 2015

Computer vision tools for the non-invasive assessment of autism-related behavioral markers

Jordan Hashemi; Thiago Vallin Spina; Mariano Tepper; Amy Esler; Vassilios Morellas; Nikolaos Papanikolopoulos; Guillermo Sapiro


international conference on wireless mobile communication and healthcare | 2015

A scalable app for measuring autism risk behaviors in young children: A technical validity and feasibility study

Jordan Hashemi; Kathleen Campbell; Kimberly L. H. Carpenter; Adrianne Harris; Qiang Qiu; Mariano Tepper; Steven Espinosa; Jana Schaich Borg; Samuel Marsan; Robert Calderbank; Jeffrey P. Baker; Helen L. Egger; Geraldine Dawson; Guillermo Sapiro

Collaboration


Dive into Jordan Hashemi's collaborations.

Top Co-Authors
Amy Esler

University of Minnesota
