Publication


Featured research published by David A. Huffaker.


human factors in computing systems | 2012

Talking in circles: selective sharing in Google+

Sanjay Kairam; Mike Brzozowski; David A. Huffaker; Ed H. Chi

Online social networks have become indispensable tools for information sharing, but existing all-or-nothing models for sharing have made it difficult for users to target information to specific parts of their networks. In this paper, we study Google+, which enables users to selectively share content with specific Circles of people. Through a combination of log analysis with surveys and interviews, we investigate how active users organize and select audiences for shared content. We find that these users frequently engaged in selective sharing, creating circles to manage content across particular life facets, ties of varying strength, and interest-based groups. Motivations to share spanned personal and informational reasons, and users frequently weighed limiting factors (e.g. privacy, relevance, and social norms) against the desire to reach a large audience. Our work identifies implications for the design of selective sharing mechanisms in social networks.
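
The Circles mechanism the paper studies is, at its core, an audience-selection data structure: contacts are grouped into named circles, and each post is shared with the union of a chosen subset of circles. The sketch below illustrates that idea; the class and field names are our own simplification for illustration, not Google+'s actual data model.

```python
from dataclasses import dataclass, field

# Simplified illustration of circle-based selective sharing
# (our own data model, not Google+'s implementation).

@dataclass
class User:
    name: str
    # circle name -> set of contact ids
    circles: dict[str, set[str]] = field(default_factory=dict)

    def share(self, post_id: str, circle_names: list[str]) -> set[str]:
        """Return the audience for a post: the union of the selected circles."""
        audience: set[str] = set()
        for name in circle_names:
            audience |= self.circles.get(name, set())
        return audience

alice = User("alice", circles={
    "family": {"bob", "carol"},
    "coworkers": {"dan", "erin"},
    "photography": {"erin", "frank"},
})

# Selective sharing: this post reaches only family and the hobby group.
print(alice.share("post-1", ["family", "photography"]))
# audience contains bob, carol, erin, frank -- coworkers-only contacts excluded
```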


symposium on usable privacy and security | 2012

Are privacy concerns a turn-off?: engagement and privacy in social networks

Jessica Staddon; David A. Huffaker; Larkin Brown; Aaron Sedley

We describe survey results from a representative sample of 1,075 U.S. social network users who use Facebook as their primary network. Our results show a strong association between low engagement and privacy concern. Specifically, users who report concerns around sharing control, comprehension of sharing practices, or general Facebook privacy also report consistently less time spent as well as less (self-reported) posting, commenting, and Liking of content. The limited evidence of other significant differences between engaged users and others suggests that privacy-related concerns may be an important gate to engagement. Indeed, privacy concern and network size are the only malleable attributes that we find to have a significant association with engagement. We manually categorize the privacy concerns, finding that many are nonspecific and not associated with negative personal experiences. Finally, we identify some education and utility issues associated with low social network activity, suggesting avenues for increasing engagement amongst current users.
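
To make the reported association concrete, the following sketch shows the kind of group comparison such a survey analysis might use: testing whether an engagement measure differs between users who do and do not report privacy concern. The data below is synthetic and the test choice is our own assumption, not the paper's actual analysis.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

# Toy stand-in for the survey data (not the paper's 1,075-respondent sample):
# one row per respondent, with a binary privacy-concern flag and
# self-reported weekly minutes on the site as the engagement measure.
df = pd.DataFrame({
    "privacy_concern": [1, 1, 1, 0, 0, 0, 1, 0, 0, 1],
    "weekly_minutes":  [30, 45, 20, 120, 200, 90, 15, 150, 80, 60],
})

concerned = df.loc[df.privacy_concern == 1, "weekly_minutes"]
unconcerned = df.loc[df.privacy_concern == 0, "weekly_minutes"]

# A nonparametric test is a reasonable default for skewed self-report data.
stat, p = mannwhitneyu(concerned, unconcerned, alternative="two-sided")
print(f"median concerned={concerned.median()}, "
      f"unconcerned={unconcerned.median()}, p={p:.3f}")
```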


international symposium on wearable computers | 2015

Mobile health apps: adoption, adherence, and abandonment

Elizabeth L. Murnane; David A. Huffaker; Gueorgi Kossinets

A myriad of mobile technologies purport to help individuals change or maintain health-related behaviors, for instance by increasing motivation or self-awareness. We provide a fine-grained categorization of popular mobile health applications and also examine the perceived efficacy of apps along with reasons underlying both app adoption and abandonment. Our findings bear implications for future tools designed to support health management.


conference on information and knowledge management | 2013

Instant foodie: predicting expert ratings from grassroots

Chenhao Tan; Ed H. Chi; David A. Huffaker; Gueorgi Kossinets; Alexander J. Smola

Consumer review sites and recommender systems typically rely on a large volume of user-contributed ratings, which makes rating acquisition an essential component in the design of such systems. User ratings are then summarized to provide an aggregate score representing a popular evaluation of an item. An inherent problem in such summarization is potential bias due to raters' self-selection and heterogeneity in terms of experience, tastes, and rating-scale interpretation. There are two major approaches to collecting ratings, each with different advantages and disadvantages. One is to allow a large number of volunteers to choose and rate items directly (a method employed by, e.g., Yelp and Google Places). Alternatively, a panel of raters may be maintained and invited to rate a predefined set of items at regular intervals (as in the Zagat Survey). The latter approach arguably results in more consistent reviews and reduced selection bias, but at the expense of much smaller coverage (fewer rated items). In this paper, we examine the two approaches to collecting user ratings of restaurants and explore whether it is possible to reconcile them. Specifically, we study the problem of inferring the more calibrated Zagat Survey ratings (which we dub expert ratings) from the user-generated ratings (grassroots) in Google Places. To that end, we employ latent factor models and provide a probabilistic treatment of the ordinal rankings. We can predict Zagat Survey ratings accurately from ad hoc user-generated ratings by joint optimization on the two datasets. Analyzing the resulting model, we find that users become more discerning as they submit more ratings. We also describe an approach to cross-city recommendations, answering questions such as "What is the equivalent of the Per Se restaurant in Chicago?"
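
The modeling idea can be sketched in a few lines: learn a per-item quality score from noisy grassroots ratings while absorbing per-rater biases, then calibrate the learned scores to the coarser expert scale with ordinal cutpoints. The toy implementation below follows that outline on synthetic data; it is a heavily simplified stand-in for the paper's latent factor model and probabilistic ordinal treatment, not a reproduction of it.

```python
import numpy as np

# Synthetic data: each grassroots rating = item quality + rater bias + noise.
rng = np.random.default_rng(0)
n_users, n_items = 200, 50
true_quality = rng.normal(0, 1, n_items)     # unobserved ground truth
user_bias = rng.normal(0, 0.5, n_users)      # some raters are harsher

ratings = [(u, i, true_quality[i] + user_bias[u] + rng.normal(0, 0.3))
           for u in range(n_users)
           for i in rng.choice(n_items, size=10, replace=False)]

b_u = np.zeros(n_users)   # learned per-user bias
q_i = np.zeros(n_items)   # learned per-item quality
lr, reg = 0.05, 0.02
for _ in range(30):       # plain SGD over the rating triples
    for u, i, r in ratings:
        err = r - (b_u[u] + q_i[i])
        b_u[u] += lr * (err - reg * b_u[u])
        q_i[i] += lr * (err - reg * q_i[i])

# Calibrate to a 3-level "expert" scale via cutpoints on the learned scores
# (tertiles here; the paper's ordinal treatment is more principled).
cuts = np.quantile(q_i, [1 / 3, 2 / 3])
expert_pred = np.digitize(q_i, cuts)         # 0 / 1 / 2 = low / mid / high
print("corr(learned, true):", round(float(np.corrcoef(q_i, true_quality)[0, 1]), 3))
print("expert-level counts:", np.bincount(expert_pred))
```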


human factors in computing systems | 2014

Online microsurveys for user experience research

Victoria Schwanda Sosik; Elie Bursztein; Sunny Consolvo; David A. Huffaker; Gueorgi Kossinets; Kerwell Liao; Paul Morell McDonald; Aaron Sedley

This case study presents a critical analysis of microsurveys as a method for conducting user experience research. We focus specifically on Google Consumer Surveys (GCS) and analyze a combination of log data and GCSs run by the authors to investigate how they are used, who the respondents are, and the quality of the data. We find that such microsurveys can be a great way to quickly and cheaply gather large amounts of survey data, but that there are pitfalls that user experience researchers should be aware of when using the method.


international conference on human-computer interaction | 2015

Not some trumped up beef: assessing credibility of online restaurant reviews

Marina Kobayashi; Victoria Schwanda Sosik; David A. Huffaker

Online reviews, or electronic word of mouth (eWOM), are an essential source of information for people making decisions about products and services; however, they are also susceptible to abuses such as spamming and defamation. Therefore, when making decisions, readers must determine whether reviews are credible. Yet relatively little research has investigated how people make credibility judgments of online reviews. This paper presents quantitative and qualitative results from a survey of 1,979 respondents, showing that attributes of the reviewer and review content influence credibility ratings. Especially important for judging credibility are the level of detail in the review, whether or not it is balanced in sentiment, and whether the reviewer demonstrates expertise. Our findings contribute to the understanding of how people judge eWOM credibility, and we suggest how eWOM platforms can be designed to coach reviewers to write better reviews and to present reviews in a manner that facilitates credibility judgments.
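
As a purely hypothetical illustration of how a platform might operationalize the three attributes the survey highlights (detail, balanced sentiment, reviewer expertise), the sketch below scores a review with hand-picked features and weights; none of the definitions or numbers come from the paper.

```python
# Hypothetical credibility heuristic; feature definitions and
# weights are our own toy choices, not the paper's model.

def credibility_score(text: str, reviewer_review_count: int) -> float:
    words = [w.strip(".,!?") for w in text.lower().split()]

    # Level of detail: longer reviews read as more detailed, capped at 1.0.
    detail = min(len(words) / 100.0, 1.0)

    # Balanced sentiment: mentioning both upsides and downsides reads as
    # more credible than one-sided praise or complaint.
    positive = {"great", "good", "excellent", "delicious", "friendly"}
    negative = {"bad", "terrible", "slow", "rude", "bland"}
    pos = sum(w in positive for w in words)
    neg = sum(w in negative for w in words)
    balance = 1.0 if (pos > 0 and neg > 0) else 0.3

    # Reviewer expertise: proxied by review history, capped at 1.0.
    expertise = min(reviewer_review_count / 50.0, 1.0)

    return round(0.4 * detail + 0.3 * balance + 0.3 * expertise, 2)

review = ("The braised pork was excellent and the staff friendly, "
          "though service was slow at peak hours.")
print(credibility_score(review, reviewer_review_count=12))  # balanced -> 0.44
```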


Archive | 2012

System and Method for Group Recommendation of Objects Using User Comparisons of Object Characteristics

Scott Golder; Ed H. Chi; David A. Huffaker; Gueorgi Kossinets


Archive | 2013

System and Method to Categorize Users

David A. Huffaker; Makoto Uchida; Abhijit Bose; Rachel Ida Rosenthal Schutt; Zachary Yeskel


international conference on weblogs and social media | 2012

Around the Water Cooler: Shared Discussion Topics and Contact Closeness in Social Search

Saranga Komanduri; Lujun Fang; David A. Huffaker; Jessica Staddon


Archive | 2012

Automated objective-based feature improvement

Zach Yeskel; David A. Huffaker; Rachel Ida Rosenthal Schutt; Andrew Tomkins; David Gibson; Abhijit Bose; Alexander Fabrikant; Makoto Uchida

