Publications


Featured research published by Erik Choi.


Proceedings of the American Society for Information Science and Technology | 2012

Developing a typology of online Q&A models and recommending the right model for each question type

Erik Choi; Vanessa Kitzie; Chirag Shah

Although online Q&A services have increased in popularity, the field lacks a comprehensive typology to classify different kinds of services into model types. This poster categorizes online Q&A services into four model types – community-based, collaborative, expert-based, and social. Drawing such a distinction between online Q&A models provides an overview of how these model types differ from one another and suggests implications for mitigating the weaknesses and bolstering the strengths of each model based on the types of questions addressed within it. To demonstrate differences among these models, an appropriate service was selected for each of them. Then, 500 questions were collected and analyzed for each of these services to classify question types into four categories – information-seeking, advice-seeking, opinion-seeking, and non-information seeking. The findings suggest that information-seeking questions appear to be more suitable in either a collaborative Q&A environment or an expert-based Q&A environment, while opinion-seeking questions are more common in community-based Q&A. Social Q&A, on the other hand, provides an active forum for either seeking personal advice or seeking non-information related to either self-expression or self-promotion.
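To make the comparison concrete, here is a minimal sketch of how a hand-coded sample like the one described above (question-type labels per service model) could be cross-tabulated. The model names, category labels, and example tuples are placeholders, not data from the study.

```python
# Sketch only: cross-tabulating question types by Q&A model, assuming a
# hand-coded sample like the 500-questions-per-service dataset described above.
# The tuples below are illustrative placeholders, not data from the study.
from collections import Counter

# (service_model, question_type) pairs as they might come out of manual coding
coded_questions = [
    ("community-based", "opinion-seeking"),
    ("collaborative", "information-seeking"),
    ("expert-based", "information-seeking"),
    ("social", "advice-seeking"),
    ("social", "non-information-seeking"),
    # ... one tuple per coded question
]

counts = Counter(coded_questions)
models = sorted({m for m, _ in coded_questions})
types = sorted({t for _, t in coded_questions})

# Print a simple model-by-type contingency table
print("model".ljust(18) + "".join(t.ljust(26) for t in types))
for m in models:
    print(m.ljust(18) + "".join(str(counts[(m, t)]).ljust(26) for t in types))
```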


Association for Information Science and Technology | 2016

User motivations for asking questions in online Q&A services

Erik Choi; Chirag Shah

Online Q&A services are information sources where people identify their information need, formulate the need in natural language, and interact with one another to satisfy their needs. Even though in recent years online Q&A has considerably grown in popularity and impacted information-seeking behaviors, we still lack knowledge about what motivates people to ask a question in online Q&A environments. Yahoo! Answers and WikiAnswers were selected as the test beds in the study, and a sequential mixed method employing an Internet-based survey, a diary method, and interviews was used to investigate user motivations for asking a question in online Q&A services. Cognitive needs were found to be the most significant motivation driving people to ask a question. Yet, it was found that other motivational factors (e.g., tension-free needs) also played an important role in user motivations for asking a question, depending on askers' contexts and situations. Understanding motivations for asking a question could provide a general framework for conceptualizing different contexts and situations of information needs in online Q&A. The findings have several implications not only for developing better question-answering processes in online Q&A environments, but also for gaining insights into the broader understanding of online information-seeking behaviors.


Proceedings of the American Society for Information Science and Technology | 2012

“How much change do you get from 40$?” – Analyzing and addressing failed questions on social Q&A

Chirag Shah; Marie L. Radford; Lynn Silipigni Connaway; Erik Choi; Vanessa Kitzie

Online question-answering (Q&A) services are becoming increasingly popular among information seekers. While online Q&A services encompass both virtual reference service (VRS) and social Q&A (SQA), SQA services, such as Yahoo! Answers and WikiAnswers, have experienced more success in reaching out to the masses and leveraging subsequent participation. However, the large volume of content on some of the more popular SQA sites renders participants unable to answer some posted questions adequately or even at all. To reduce this latter category of questions that do not receive an answer, the current paper explores why fact-based questions fail on a specific Q&A service. For this exploration and analysis, thousands of failed questions were collected from Yahoo! Answers, extracting only those that were fact-based, information-seeking questions, while opinion/advice-seeking questions were discarded. A typology was then created to code reasons for failure for these questions using a grounded theory approach. Using this typology, suggestions are proposed for how the questions could be restructured or redirected to another Q&A service (possibly a VRS), so users would have a better chance of receiving an answer.


Journal of Information Science | 2014

Modalities, motivations, and materials - investigating traditional and social online Q&A services

Chirag Shah; Vanessa Kitzie; Erik Choi

With the advent of ubiquitous connectivity and a constant flux of user-generated content, people’s online information-seeking behaviours are rapidly changing, one of which includes seeking information from peers through online questioning. Ways to understand this new behaviour can be broken down into three aspects, also referred to as the three M’s – the modalities (sources and strategies) that people use when asking their questions online, their motivations behind asking these questions and choosing specific services, and the types and quality of the materials (content) generated in such an online Q&A environment. This article will provide a new framework – the three M’s – based on a synthesis of relevant literature. It will then identify some of the gaps in our knowledge about online Q&A based on this framework. These gaps will be transformed into six research questions, stemming from the three M’s, and addressed by (a) consolidating and synthesizing findings previously reported in the literature, (b) conducting new analyses of data used in prior work, and (c) administering a new study to answer questions unaddressed by the pre-existing and new analyses of prior work.


Hawaii International Conference on System Sciences | 2014

Questioning the Question – Addressing the Answerability of Questions in Community Question-Answering

Chirag Shah; Vanessa Kitzie; Erik Choi

In this paper, we investigate question quality among questions posted in Yahoo! Answers to assess what factors contribute to the goodness of a question and determine if we can flag poor quality questions. Using human assessments of whether a question is good or bad and extracted textual features from the questions, we built an SVM classifier that performed with relatively good classification accuracy for both good and bad questions. We then enhanced the performance of this classifier by using additional human assessments of question type as well as additional question features to first separate questions by type and then classify them. This two-step classifier improved the performance of the original classifier in identifying Type II errors and suggests that our model presents a novel approach for identifying bad questions with implications for query revision and routing.
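As a rough illustration of the two-step idea described above (not the authors' implementation), the sketch below first predicts a question's type and then applies a per-type quality classifier. The toy questions, labels, and the choice of TF-IDF features with a linear SVM are assumptions made for the example.

```python
# Minimal sketch of the two-step idea described above, not the authors' code:
# first predict a question's type, then apply a per-type quality classifier.
# The toy data and the TF-IDF + linear SVM feature/model choice are assumptions.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

questions = [
    "What is the capital of France?",                      # information / good
    "i need help fast!!! plz",                             # information / bad
    "How do I fix a 404 error in Apache?",                 # information / good
    "Should I tell my friend the truth about her dress?",  # advice / good
    "what should i do",                                    # advice / bad
    "any advice??",                                        # advice / bad
]
q_types   = ["information", "information", "information", "advice", "advice", "advice"]
q_quality = ["good", "bad", "good", "good", "bad", "bad"]

# Step 1: a classifier that separates questions by type
type_clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(questions, q_types)

# Step 2: one quality classifier per question type
quality_clfs = {}
for t in set(q_types):
    texts  = [q for q, ty in zip(questions, q_types) if ty == t]
    labels = [y for y, ty in zip(q_quality, q_types) if ty == t]
    if len(set(labels)) > 1:  # need both good and bad examples to train
        quality_clfs[t] = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(texts, labels)

def predict_quality(question: str) -> str:
    """Route the question by predicted type, then judge its quality."""
    t = type_clf.predict([question])[0]
    clf = quality_clfs.get(t)
    return clf.predict([question])[0] if clf else "unknown"

print(predict_quality("How do I reset my router password?"))
```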


ACM/IEEE Joint Conference on Digital Libraries | 2016

Evaluating the Quality of Educational Answers in Community Question-Answering

Long T. Le; Chirag Shah; Erik Choi

Community Question-Answering (CQA), where questions and answers are generated by peers, has become a popular method of information seeking in online environments. While the content repositories created through CQA sites have been used widely to support general-purpose tasks, using them as online digital libraries that support educational needs is an emerging practice. Horizontal CQA services, such as Yahoo! Answers, and vertical CQA services, such as Brainly, aim to help students improve their learning process by answering their educational questions. In these services, receiving high-quality answer(s) to a question is a critical factor not only for user satisfaction, but also for supporting learning. However, the questions are not necessarily answered by experts, and the askers may not have enough knowledge and skill to evaluate the quality of the answers they receive. This could be problematic when students build their own knowledge base by applying inaccurate information or knowledge acquired from online sources. Using moderators could alleviate this problem. However, moderators' evaluations of answer quality may be inconsistent because they are based on subjective assessments. Employing human assessors may also be insufficient due to the large amount of content available on a CQA site. To address these issues, we propose a framework for automatically assessing the quality of answers. This is achieved by integrating different groups of features - personal, community-based, textual, and contextual - to build a classification model and determine what constitutes answer quality. To test this evaluation framework, we collected more than 10 million educational answers posted by more than 3 million users on Brainly's United States and Poland sites. The experiments conducted on these datasets show that the model using Random Forest (RF) achieves more than 83% accuracy in identifying high-quality answers. In addition, the findings indicate that personal and community-based features have more predictive power in assessing answer quality. Our approach also achieves high values on other key metrics such as F1-score and area under the ROC curve. The work reported here can be useful in many other contexts where providing automatic quality assessment in a digital repository of textual information is paramount.
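The following is a minimal, self-contained sketch of the kind of evaluation described above: a Random Forest trained on a mix of personal, community, textual, and contextual features and scored with accuracy, F1, and ROC AUC. The specific feature names and the synthetic data are illustrative assumptions, not the Brainly dataset or the paper's actual feature set.

```python
# Illustrative sketch only (not the Brainly pipeline): a Random Forest over a
# mix of personal, community, textual, and contextual features, scored with
# accuracy, F1, and ROC AUC as in the evaluation described above. The feature
# names and the synthetic data are assumptions made for the example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(0, 5000, n),   # personal: answerer's accumulated points
    rng.integers(0, 500, n),    # community-based: thanks received from peers
    rng.integers(5, 400, n),    # textual: answer length in words
    rng.integers(0, 48, n),     # contextual: hours between question and answer
])
# Synthetic target loosely tied to the features, for demonstration only
y = ((X[:, 0] > 2500) & (X[:, 2] > 50)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]
print("accuracy:", accuracy_score(y_te, pred))
print("F1:      ", f1_score(y_te, pred))
print("ROC AUC: ", roc_auc_score(y_te, proba))
print("feature importances:", clf.feature_importances_)
```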


Archive | 2013

A machine learning-based approach to predicting success of questions on social question-answering

Erik Choi; Vanessa Kitzie; Chirag Shah

While social question-answering (SQA) services are becoming increasingly popular, there is often an issue of unsatisfactory or missing information for a question posed by an information seeker. This study creates a model to predict question failure, or a question that does not receive an answer, within the social Q&A site Yahoo! Answers. To do so, observed shared characteristics of failed questions were translated into empirical features, both textual and non-textual in nature, and measured using machine extraction methods. A classifier was then trained using these features and tested on a data set of 400 questions—half of them successful, half not—to determine the accuracy of the classifier in identifying failed questions. The results show the substantial ability of the approach to correctly identify the likelihood of success or failure of a question, resulting in a promising tool to automatically identify ill-formed questions and/or questions that are likely to fail and make suggestions on how to revise them.
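As a hedged illustration of combining textual and non-textual question features for failure prediction, the sketch below joins TF-IDF text features with simple numeric features in a single pipeline. The feature names (length, posted_hour), the toy rows, and the logistic-regression classifier are assumptions for the example; the abstract above does not specify the study's exact features or classifier.

```python
# A rough sketch of combining textual and non-textual question features for
# failure prediction. The feature names (length, posted_hour), the toy rows,
# and the logistic-regression classifier are assumptions for illustration; the
# abstract above does not specify the study's exact features or classifier.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

data = pd.DataFrame({
    "text": ["Best laptop under 500 dollars?", "why???",
             "How long should I boil an egg?", "someone talk to me",
             "What is the population of Brazil?", "ugh nobody ever answers me"],
    "length": [30, 6, 30, 18, 33, 26],        # non-textual: character count
    "posted_hour": [14, 3, 9, 2, 16, 1],      # non-textual: hour of posting
    "failed": [0, 1, 0, 1, 0, 1],             # 1 = question received no answer
})

features = ColumnTransformer([
    ("tfidf", TfidfVectorizer(), "text"),                   # textual features
    ("numeric", "passthrough", ["length", "posted_hour"]),  # non-textual features
])
model = Pipeline([("features", features),
                  ("clf", LogisticRegression(max_iter=1000))])
model.fit(data.drop(columns="failed"), data["failed"])

new_q = pd.DataFrame({"text": ["help plz"], "length": [8], "posted_hour": [4]})
print("predicted probability of failure:", model.predict_proba(new_q)[0, 1])
```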


Association for Information Science and Technology | 2015

Utilizing content moderators to investigate critical factors for assessing the quality of answers on Brainly, a social learning Q&A platform for students: A pilot study

Erik Choi; Michal Borkowski; Julien Zakoian; Katie Sagan; Kent Scholla; Crystal Ponti; Michal Labedz; Maciek Bielski

In this paper, we present findings from a pilot study that used content moderators from Brainly, a social learning Q&A platform, to assess the quality of answers. Moderators were chosen because Brainly users who actively moderate content arguably have a better contextual understanding of how users interact with one another through question-answering activities and of which answers are relevant and appropriate to a question in the context of Brainly. The findings indicate that helpfulness, informativeness, and relevance are the most critical factors affecting the quality of answers. Further content analysis also identified two new criteria: 1) descriptiveness – how well an answer provides a descriptive summary with detailed and additional information, and 2) explicitness – how clearly an answer is constructed to reduce vagueness about what information the answerer intends to provide to satisfy an asker's need.


Journal of Information Science | 2017

Asking for more than an answer: What do askers expect in online Q&A services?

Erik Choi; Chirag Shah

Q&A services allow one to express an information need in the form of a natural language question and seek information from users of those services. Despite a recent rise in research related to various issues of online Q&A, there is still a lack of consideration for how the situational context behind asking a question affects quality judgements. By focusing on users’ expectations when asking a question, the work reported here builds on a framework for understanding how people assess information. A mixed-method analysis – sequentially employing an Internet-based survey, diaries and interviews – was used in a study to investigate this issue. A total of 226 online Q&A users participated in the study, and it was found that looking for quick responses, looking for additional or alternative information, and looking for accurate or complete information were the primary expectations of the askers. Findings can help identify why and how users engage in information seeking within an online Q&A context, and may help develop more comprehensive personalised approaches to deriving information relevance and satisfaction that include user expectations.


Archive | 2013


Erik Choi; Craig R. Scott; Chirag Shah

Social Q&A (SQA) services have been growing in popularity among health information seekers. Even though research has paid much attention to a variety of characteristics of SQA services to investigate how people interact with each other when seeking and sharing information, the issues of identity and anonymity in these services that might relate to key user outcomes have been understudied. Such issues are especially important when dealing with stigmatized health conditions or sensitive health-related questions, where choices are made about revealing and concealing identifying information in SQA environments. In the current study, we identified 110 stigmatized health questions from Yahoo! Answers that contained varied amounts and types of identity information corresponding to a framework developed in the study. We found differences in whether personal contact information was provided in users' profiles when relating identity information in user profiles to identity information in user questions. Stigmatized health questions containing a high amount of demographic information tended to receive a slightly higher average number of responses and to receive a best answer more quickly.
