Publications

Featured research published by Darren Gergle.


Human Factors in Computing Systems | 2002

Effects of four computer-mediated communications channels on trust development

Nathan Bos; Judith S. Olson; Darren Gergle; Gary M. Olson; Zach Wright

When virtual teams need to establish trust at a distance, it is advantageous for them to use rich media to communicate. We studied the emergence of trust in a social dilemma game in four different communication situations: face-to-face, video, audio, and text chat. All three of the richer conditions were significant improvements over text chat. Video and audio conferencing groups were nearly as good as face-to-face, but both did show some evidence of what we term delayed trust (slower progress toward full cooperation) and fragile trust (vulnerability to opportunistic behavior).


Journal of Medical Internet Research | 2011

Harnessing Context Sensing to Develop a Mobile Intervention for Depression

Michelle Nicole Burns; Mark Begale; Jennifer Duffecy; Darren Gergle; Chris J Karr; Emily Giangrande; David C. Mohr

Background: Mobile phone sensors can be used to develop context-aware systems that automatically detect when patients require assistance. Mobile phones can also provide ecological momentary interventions that deliver tailored assistance during problematic situations. However, such approaches have not yet been used to treat major depressive disorder.

Objective: The purpose of this study was to investigate the technical feasibility, functional reliability, and patient satisfaction with Mobilyze!, a mobile phone- and Internet-based intervention including ecological momentary intervention and context sensing.

Methods: We developed a mobile phone application and supporting architecture, in which machine learning models (ie, learners) predicted patients’ mood, emotions, cognitive/motivational states, activities, environmental context, and social context based on at least 38 concurrent phone sensor values (eg, global positioning system, ambient light, recent calls). The website included feedback graphs illustrating correlations between patients’ self-reported states, as well as didactics and tools teaching patients behavioral activation concepts. Brief telephone calls and emails with a clinician were used to promote adherence. We enrolled 8 adults with major depressive disorder in a single-arm pilot study to receive Mobilyze! and complete clinical assessments for 8 weeks.

Results: Promising accuracy rates (60% to 91%) were achieved by learners predicting categorical contextual states (eg, location). For states rated on scales (eg, mood), predictive capability was poor. Participants were satisfied with the phone application and improved significantly on self-reported depressive symptoms (betaweek = –.82, P < .001, per-protocol Cohen d = 3.43) and interview measures of depressive symptoms (betaweek = –.81, P < .001, per-protocol Cohen d = 3.55). Participants also became less likely to meet criteria for major depressive disorder diagnosis (bweek = –.65, P = .03, per-protocol remission rate = 85.71%). Comorbid anxiety symptoms also decreased (betaweek = –.71, P < .001, per-protocol Cohen d = 2.58).

Conclusions: Mobilyze! is a scalable, feasible intervention with preliminary evidence of efficacy. To our knowledge, it is the first ecological momentary intervention for unipolar depression, as well as one of the first attempts to use context sensing to identify mental health-related states. Several lessons learned regarding technical functionality, data mining, and software development process are discussed.

Trial Registration: Clinicaltrials.gov NCT01107041; http://clinicaltrials.gov/ct2/show/NCT01107041 (Archived by WebCite at http://www.webcitation.org/60CVjPH0n)
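The abstract describes learners that predict a categorical contextual state (eg, location) from concurrent phone sensor values, without specifying the models used. As a rough illustration of that idea only, the sketch below classifies an invented sensor snapshot with a minimal nearest-neighbor vote; the feature names, values, and labels are hypothetical and are not the study's data or method.

```python
import math

# Hypothetical training examples: (ambient_light, speed_m_s, hour_of_day) -> location label.
# These values are invented for illustration; the study used 38+ concurrent sensor streams.
TRAIN = [
    ((850.0, 0.1, 10), "work"),
    ((120.0, 0.0, 23), "home"),
    ((900.0, 1.4, 12), "outdoors"),
    ((100.0, 0.0, 7),  "home"),
    ((800.0, 0.2, 15), "work"),
]

def predict_location(sample, k=3):
    """Classify a sensor snapshot by majority vote of its k nearest training points."""
    nearest = sorted(TRAIN, key=lambda ex: math.dist(ex[0], sample))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

# A dark, stationary, late-night snapshot looks most like the "home" examples.
print(predict_location((110.0, 0.0, 22)))  # -> home
```

A real context-sensing pipeline would add feature scaling and per-user models, but the core step is the same: map a vector of sensor readings to a discrete context label.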


ACM Transactions on Computer-Human Interaction | 2006

Physically large displays improve performance on spatial tasks

Desney S. Tan; Darren Gergle; Peter Scupelli; Randy Pausch

Large wall-sized displays are becoming prevalent. Although researchers have articulated qualitative benefits of group work on large displays, little work has been done to quantify the benefits for individual users. In this article we present four experiments comparing the performance of users working on a large projected wall display to that of users working on a standard desktop monitor. In these experiments, we held the visual angle constant by adjusting the viewing distance to each of the displays. Results from the first two experiments suggest that physically large displays, even when viewed at identical visual angles as smaller ones, help users perform better on mental rotation tasks. We show through the experiments how these results may be attributed, at least in part, to large displays immersing users within the problem space and biasing them into using more efficient cognitive strategies. In the latter two experiments, we extend these results, showing the presence of these effects with more complex tasks, such as 3D navigation and mental map formation and memory. Results further show that the effects of physical display size are independent of other factors that may induce immersion, such as interactivity and mental aids within the virtual environments. We conclude with a general discussion of the findings and possibilities for future work.


Conference on Computer Supported Cooperative Work | 2002

The use of visual information in shared visual spaces: informing the development of virtual co-presence

Robert E. Kraut; Darren Gergle; Susan R. Fussell

A shared visual workspace is one where multiple people can see the same objects at roughly the same time. We present findings from an experiment investigating the effects of shared visual space on a collaborative puzzle task. We show that having the shared visual space helps collaborators understand the current state of their task and enables them to communicate and ground their conversations efficiently. These processes are associated with faster and better task performance. Delaying the visual update in the space reduces benefits and degrades performance. The shared visual space is more useful when tasks are visually complex or when actors have no simple vocabulary for describing their world. We find evidence for the ways in which participants adapt their discourse processes to their level of shared visual information.


Human Factors in Computing Systems | 2003

With similar visual angles, larger displays improve spatial performance

Desney S. Tan; Darren Gergle; Peter Scupelli; Randy Pausch

Large wall-sized displays are becoming prevalent. Although researchers have articulated qualitative benefits of group work on large displays, little work has been done to quantify the benefits for individual users. We ran two studies comparing the performance of users working on a large projected wall display to that of users working on a standard desktop monitor. In these studies, we held the visual angle constant by adjusting the viewing distance to each of the displays. Results from the first study indicate that although there was no significant difference in performance on a reading comprehension task, users performed about 26% better on a spatial orientation task done on the large display. Results from the second study suggest that the large display affords a greater sense of presence, allowing users to treat the spatial task as an egocentric rather than an exocentric rotation. We discuss future work to extend our findings and formulate design principles for computer interfaces and physical workspaces.


Conference on Computer Supported Cooperative Work | 2004

Action as language in a shared visual space

Darren Gergle; Robert E. Kraut; Susan R. Fussell

A shared visual workspace allows multiple people to see similar views of objects and environments. Prior empirical literature demonstrates that visual information helps collaborators understand the current state of their task and enables them to communicate and ground their conversations efficiently. We present an empirical study that demonstrates how action replaces explicit verbal instruction in a shared visual workspace. Pairs performed a referential communication task with and without a shared visual space. A detailed sequential analysis of the communicative content reveals that pairs with a shared workspace were less likely to explicitly verify their actions with speech. Rather, they relied on visual information to provide the necessary communicative and coordinative cues.


Journal of Language and Social Psychology | 2004

Language Efficiency and Visual Technology: Minimizing Collaborative Effort with Visual Information

Darren Gergle; Robert E. Kraut; Susan R. Fussell

When collaborators work on a physical task, seeing a common workspace transforms their language use and reduces their overall collaborative effort. This article shows how visual information can make communication more efficient. In an experiment, dyads collaborated on building a puzzle. They communicated without a shared visual space, using a shared space featuring immediately updated visual information, and using a shared space featuring delayed visual updating. Having the shared visual space helps collaborators understand the current state of their task and enables them to ground their conversations efficiently, as seen in the ways in which participants adapted their discourse processes to their level of shared visual information. These processes are associated with faster and better task performance. Delaying the visual update reduces benefits and degrades performance. The shared visual space is more useful when tasks are visually complex or when participants have no simple vocabulary for describing their environments.


Human Factors in Computing Systems | 2004

Physically large displays improve path integration in 3D virtual navigation tasks

Desney S. Tan; Darren Gergle; Peter Scupelli; Randy Pausch

Previous results have shown that users perform better on spatial orientation tasks involving static 2D scenes when working on physically large displays as compared to small ones. This was found to be true even when the displays presented the same images at equivalent visual angles. Further investigation has suggested that large displays may provide a greater sense of presence, which biases users into adopting more efficient strategies to perform tasks. In this work, we extend those findings, demonstrating that users are more effective at performing 3D virtual navigation tasks on large displays. We also show that even though interacting with the environment affects performance, effects induced by interactivity are independent of those induced by physical display size. Together, these findings allow us to derive guidelines for the design and presentation of interactive 3D environments on physically large displays.


Human-Computer Interaction | 2012

Using Visual Information for Grounding and Awareness in Collaborative Tasks

Darren Gergle; Robert E. Kraut; Susan R. Fussell

When pairs work together on a physical task, seeing a common workspace facilitates communication and benefits performance. When mediating such activities, however, the choice of technology can transform the visual information in ways that impact critical coordination processes. In this article we examine two coordination processes that are impacted by visual information—situation awareness and conversational grounding—which are theoretically distinct but often confounded in empirical research. We present three empirical studies that demonstrate how shared visual information supports collaboration through these two distinct routes. We also address how particular features of visual information interact with features of the task to influence situation awareness and conversational grounding, and further demonstrate how these features affect conversation and coordination. Experiment 1 manipulates the immediacy of the visual information and shows that immediate visual feedback facilitates collaboration by improving both situation awareness and conversational grounding. In Experiment 2, by misaligning the perspective through which the Worker and Helper see the work area we disrupt the ability of visual feedback to support conversational grounding but not situation awareness. The findings demonstrate that visual information supports the central mechanism of conversational grounding. Experiment 3 disrupts the ability of visual feedback to support situation awareness by reducing the size of the common viewing area. The findings suggest that visual information independently supports both situation awareness and conversational grounding. We conclude with a general discussion of the results and their implications for theory development and the future design of collaborative technologies.


Conference on Computer Supported Cooperative Work | 2008

The language of emotion in short blog texts

Alastair J. Gill; Robert M. French; Darren Gergle; Jon Oberlander

Emotion is central to human interactions, and automatic detection could enhance our experience with technologies. We investigate the linguistic expression of fine-grained emotion in 50 and 200 word samples of real blog texts previously coded by expert and naive raters. Content analysis (LIWC) reveals angry authors use more affective language and negative affect words, and that joyful authors use more positive affect words. Additionally, a co-occurrence semantic space approach (LSA) was able to identify fear (which naive human emotion raters could not do). We relate our findings to human emotion perception and note potential computational applications.
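The content-analysis step above (LIWC) works by counting how many words in a text fall into predefined affect categories. As a toy illustration of that mechanism only, the sketch below scores a text against two tiny invented word lists; LIWC's actual dictionaries are proprietary and far larger, so these lists are stand-ins, not the tool's categories.

```python
# Toy LIWC-style word counting: score a text by the fraction of its words
# that fall into small affect categories. The word lists are invented
# placeholders for illustration, not LIWC's real dictionaries.
POSITIVE = {"happy", "love", "great", "joy", "wonderful"}
NEGATIVE = {"angry", "hate", "awful", "sad", "terrible"}

def affect_scores(text):
    """Return the proportion of words matching each affect category."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    n = len(words) or 1
    return {
        "positive": sum(w in POSITIVE for w in words) / n,
        "negative": sum(w in NEGATIVE for w in words) / n,
    }

print(affect_scores("I hate this awful weather, it makes me sad."))
```

Approaches like LSA go further by comparing texts in a co-occurrence semantic space rather than matching literal word lists, which is why they can pick up categories (such as fear) that raw counts miss.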

Collaboration


Explore Darren Gergle's collaborations.

Top Co-Authors

Tom Brinck (Carnegie Mellon University)

Robert E. Kraut (Carnegie Mellon University)