Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Jill Boberg is active.

Publications


Featured research published by Jill Boberg.


IEEE International Conference on Automatic Face and Gesture Recognition | 2013

Automatic behavior descriptors for psychological disorder analysis

Stefan Scherer; Giota Stratou; Marwa Mahmoud; Jill Boberg; Jonathan Gratch; Albert A. Rizzo; Louis-Philippe Morency

We investigate the capabilities of automatic nonverbal behavior descriptors to identify indicators of psychological disorders such as depression, anxiety, and post-traumatic stress disorder. We seek to confirm and enrich the present state of the art, predominantly based on qualitative manual annotations, with automatic quantitative behavior descriptors. In this paper, we propose four nonverbal behavior descriptors that can be automatically estimated from visual signals. We introduce a new dataset called the Distress Assessment Interview Corpus (DAIC), which includes 167 dyadic interactions between a confederate interviewer and a paid participant. Our evaluation on this dataset shows correlation of our automatic behavior descriptors with specific psychological disorders as well as a generic distress measure. Our analysis also includes a deeper study of self-adaptor and fidgeting behaviors based on detailed annotations of where these behaviors occur.
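
As a rough illustration of the kind of analysis this abstract describes, the sketch below correlates per-session behavior descriptors with self-report scores. It is a minimal Python sketch under assumed inputs: the file name, descriptor columns, and outcome columns are hypothetical and are not taken from the DAIC release or the paper.

```python
# Hypothetical sketch: correlating automatically extracted behavior descriptors
# with self-reported distress/disorder scores. Column names and the CSV path
# are illustrative placeholders, not the paper's actual variables.
import pandas as pd
from scipy.stats import pearsonr

# One row per interview session: descriptor values plus questionnaire scores.
sessions = pd.read_csv("daic_sessions.csv")

descriptors = ["gaze_downward_ratio", "smile_intensity", "fidget_rate", "self_adaptor_rate"]
outcomes = ["depression_score", "ptsd_score", "distress_factor"]

for d in descriptors:
    for o in outcomes:
        r, p = pearsonr(sessions[d], sessions[o])
        print(f"{d} vs {o}: r = {r:+.2f}, p = {p:.3f}")
```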


Image and Vision Computing | 2014

Automatic Audiovisual Behavior Descriptors for Psychological Disorder Analysis

Stefan Scherer; Giota Stratou; Gale M. Lucas; Marwa Mahmoud; Jill Boberg; Jonathan Gratch; Albert A. Rizzo; Louis-Philippe Morency

We investigate the capabilities of automatic audiovisual nonverbal behavior descriptors to identify indicators of psychological disorders such as depression, anxiety, and post-traumatic stress disorder. Due to strong correlations between these disorders, as measured with standard self-assessment questionnaires in this study, we focus our investigations in particular on a generic distress measure identified using factor analysis. Within this work, we seek to confirm and enrich the present state of the art, predominantly based on qualitative manual annotations, with automatic quantitative behavior descriptors. We propose a number of nonverbal behavior descriptors that can be automatically estimated from audiovisual signals. Such automatic behavior descriptors could be used to support healthcare providers with quantified and objective observations that could ultimately improve clinical assessment. We evaluate our work on the Distress Assessment Interview Corpus (DAIC), which comprises dyadic interactions between a confederate interviewer and a paid participant. Our evaluation on this dataset shows correlation of our automatic behavior descriptors with the derived general distress measure. Our analysis also includes a deeper study of self-adaptor and fidgeting behaviors based on detailed annotations of where these behaviors occur.
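
The generic distress measure is described as a single factor extracted from correlated self-assessment questionnaires. The sketch below shows one way that step could look; it is a minimal illustration, and the questionnaire column names and settings are assumptions rather than the paper's exact instruments.

```python
# Hypothetical sketch: deriving a single "generic distress" score via factor
# analysis over correlated self-assessment questionnaires, as the abstract
# describes. Questionnaire column names are assumed for illustration.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

scores = pd.read_csv("questionnaires.csv")  # one row per participant
items = scores[["depression_total", "ptsd_total", "anxiety_total"]]

# Standardize the questionnaire totals, then extract one latent factor.
z = StandardScaler().fit_transform(items)
fa = FactorAnalysis(n_components=1, random_state=0)
scores["distress_factor"] = fa.fit_transform(z)[:, 0]

print(fa.components_)                       # loading of each questionnaire on the factor
print(scores["distress_factor"].describe())
```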


motion in games | 2016

The effect of operating a virtual doppelganger in a 3D simulation

Gale M. Lucas; Evan Szablowski; Jonathan Gratch; Andrew W. Feng; Tiffany Huang; Jill Boberg; Ari Shapiro

Recent advances in scanning technology have enabled the widespread capture of 3D character models based on human subjects. Intuition suggests that, with these new capabilities to create avatars that look like their users, every player should have his or her own avatar to play video games or simulations. We explicitly test the impact of having one's own avatar (vs. a yoked control avatar) in a simulation (i.e., maze running task with mines). We test the impact of avatar identity on both subjective (e.g., feeling connected and engaged, liking the avatar's appearance, feeling upset when the avatar is injured, enjoying the game) and behavioral variables (e.g., time to complete task, speed, number of mines triggered, riskiness of maze path chosen). Results indicate that having an avatar that looks like the user improves their subjective experience, but there is no significant effect on how users perform in the simulation.


Frontiers in Robotics and AI | 2017

Reporting Mental Health Symptoms: Breaking Down Barriers to Care with Virtual Human Interviewers

Gale M. Lucas; Albert Rizzo; Jonathan Gratch; Stefan Scherer; Giota Stratou; Jill Boberg; Louis-Philippe Morency

A common barrier to healthcare for psychiatric conditions is the stigma associated with these disorders. Perceived stigma prevents many from reporting their symptoms. Stigma is a particularly pervasive problem among military service members, preventing them from reporting symptoms of combat-related conditions like posttraumatic stress disorder (PTSD). However, research shows increased reporting by service members when anonymous assessments are used. For example, service members report more symptoms of PTSD when they anonymously answer the Post-Deployment Health Assessment (PDHA) symptom checklist compared to the official PDHA, which is identifiable and linked to their military records. To investigate the factors that influence reporting of psychological symptoms by service members, we used a transformative technology: automated virtual humans that interview people about their symptoms. Such virtual human interviewers allow simultaneous use of two techniques for eliciting disclosure that would otherwise be incompatible; they afford anonymity while also building rapport. We examined whether virtual human interviewers could increase disclosure of mental health symptoms among active-duty service members who had just returned from a year-long deployment in Afghanistan. Service members reported more symptoms during a conversation with a virtual human interviewer than on the official PDHA. They also reported more to a virtual human interviewer than on an anonymized PDHA. A second, larger sample of active-duty and former service members showed a similar effect that approached statistical significance. Because respondents in both studies shared more with virtual human interviewers than on an anonymized PDHA (even though both conditions control for stigma and ramifications for service members' military records), virtual human interviewers that build rapport may provide a superior option to encourage reporting.


Intelligent Virtual Agents | 2016

Do Avatars that Look Like Their Users Improve Performance in a Simulation?

Gale M. Lucas; Evan Szablowski; Jonathan Gratch; Andrew W. Feng; Tiffany Huang; Jill Boberg; Ari Shapiro

Recent advances in scanning technology have enabled the widespread capture of 3D character models based on human subjects. Intuition suggests that, with these new capabilities to create avatars that look like their users, every player should have his or her own avatar to play videogames or simulations. We explicitly test the impact of having one’s own avatar (vs. a yoked control avatar) in a simulation (i.e., maze running task with mines). We test the impact of avatar identity on both subjective (e.g., feeling connected and engaged, liking avatar’s appearance, feeling upset when avatar’s injured, enjoying the game) and behavioral variables (e.g., time to complete task, speed, number of mines triggered, riskiness of maze path chosen). Results indicate that having an avatar that looks like the user improves their subjective experience, but there is no significant effect on how users behave in the simulation.


Human-Robot Interaction | 2018

Getting to Know Each Other: The Role of Social Dialogue in Recovery from Errors in Social Robots

Gale M. Lucas; Jill Boberg; David R. Traum; Ron Artstein; Jonathan Gratch; Alesia Gainer; Emmanuel Johnson; Anton Leuski; Mikio Nakano

This work explores the extent to which social dialogue can mitigate (or exacerbate) the loss of trust caused when robots make conversational errors. Our study uses a NAO robot programmed to persuade users to agree with its rankings on two tasks. We perform two manipulations: (1) the timing of conversational errors, with the robot exhibiting errors either in the first task, the second task, or neither; and (2) the presence of social dialogue, with users either engaging in a social dialogue with the robot or completing a control task between the two tasks. We found that the timing of the errors matters: replicating previous research, conversational errors reduce the robot's influence on the second task, but not on the first task. Social dialogue interacts with the timing of errors, acting as an intensifier: social dialogue helps the robot recover from prior errors, and actually boosts subsequent influence; but social dialogue backfires if it is followed by errors, because it extends the period of good performance, creating a stronger contrast effect with the subsequent errors. Social robots should therefore be designed to avoid errors after periods of good performance even more carefully than early in a dialogue.


International Conference on Multimodal Interfaces | 2016

Niki and Julie: a robot and virtual human for studying multimodal social interaction

Ron Artstein; David R. Traum; Jill Boberg; Alesia Gainer; Jonathan Gratch; Emmanuel Johnson; Anton Leuski; Mikio Nakano

We demonstrate two agents, a robot and a virtual human, which can be used for studying factors that impact social influence. The agents engage in dialogue scenarios that build familiarity, share information, and attempt to influence a human participant. The scenarios are variants of the classical “survival task,” where members of a team rank the importance of a number of items (e.g., items that might help one survive a crash in the desert). These are ranked individually and then re-ranked following a team discussion, and the difference in ranking provides an objective measure of social influence. Survival tasks have been used in psychology, virtual human research, and human-robot interaction. Our agents are operated in a “Wizard-of-Oz” fashion, where a hidden human operator chooses the agents’ dialogue actions while interacting with an experiment participant.
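
The influence measure described above is the change in a participant's ranking after the team discussion. One way to operationalize it, shown as a hypothetical sketch rather than the authors' exact metric, is to compare the distance between the participant's ranking and the agent's ranking before and after the discussion.

```python
# Hypothetical sketch of a rank-difference influence measure for the survival
# task: how far the participant's ranking moves toward the agent's ranking.
# Item lists below are illustrative examples, not the study's actual items.
def rank_distance(ranking_a, ranking_b):
    """Sum of absolute differences in the rank assigned to each item."""
    pos_b = {item: i for i, item in enumerate(ranking_b)}
    return sum(abs(i - pos_b[item]) for i, item in enumerate(ranking_a))

def social_influence(before, after, agent):
    """Positive values mean the participant moved toward the agent's ranking."""
    return rank_distance(before, agent) - rank_distance(after, agent)

# Example: desert-survival items ranked from most to least important.
agent  = ["mirror", "water", "coat", "flashlight", "map"]
before = ["map", "water", "flashlight", "coat", "mirror"]
after  = ["water", "mirror", "map", "coat", "flashlight"]

print(social_influence(before, after, agent))  # prints 4: a shift toward the agent
```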


Language Resources and Evaluation | 2014

The Distress Analysis Interview Corpus of human and computer interviews

Jonathan Gratch; Ron Artstein; Gale M. Lucas; Giota Stratou; Stefan Scherer; Angela Nazarian; Rachel Wood; Jill Boberg; David DeVault; Stacy Marsella; David R. Traum; Albert A. Rizzo; Louis-Philippe Morency


Medicine Meets Virtual Reality | 2013

User-state sensing for virtual health agents and telehealth applications.

Jonathan Gratch; Louis-Philippe Morency; Stefan Scherer; Giota Stratou; Jill Boberg; Sebastian Koenig; Todd Adamson; Albert A. Rizzo


Journal of Pain Management | 2016

Detection and computational analysis of psychological signals using a virtual human interviewing agent

Albert A. Rizzo; Stefan Scherer; David DeVault; Jonathan Gratch; Ronald Artstein; Arno Hartholt; Gale M. Lucas; Stacy Marsella; Fabrizio Morbini; Angela Nazarian; Giota Stratou; David R. Traum; Rachel Wood; Jill Boberg; Louis-Philippe Morency

Collaboration


Dive into Jill Boberg's collaborations.

Top Co-Authors

Jonathan Gratch, University of Southern California
Gale M. Lucas, University of Southern California
David R. Traum, University of Southern California
Giota Stratou, University of Southern California
Ron Artstein, University of Southern California
Stefan Scherer, University of Southern California
Albert A. Rizzo, University of Southern California
Alesia Gainer, University of Southern California
Anton Leuski, University of Southern California