Publication


Featured research published by Vanessa Kitzie.


Journal of the Association for Information Science and Technology | 2012

Social Q&A and virtual reference—comparing apples and oranges with the help of experts and users

Chirag Shah; Vanessa Kitzie

Online question-answering (Q&A) services are becoming increasingly popular among information seekers. We divide them into two categories, social Q&A (SQA) and virtual reference (VR), and examine how experts (librarians) and end users (students) evaluate information within both categories. To accomplish this, we first performed an extensive literature review and compiled a list of the aspects found to contribute to a “good” answer. These aspects were divided among three high-level concepts: relevance, quality, and satisfaction. We then interviewed both experts and users, asking them first to reflect on their online Q&A experiences and then to comment on our list of aspects. These interviews uncovered two main disparities. One disparity was found between users’ expectations of these services and how information was actually delivered by them; the other was found between the perceptions of users and experts with regard to the aforementioned three characteristics of relevance, quality, and satisfaction. Using qualitative analyses of both the interviews and relevant literature, we suggest ways to create better hybrid solutions for online Q&A and to bridge the gap between experts’ and users’ understandings of relevance, quality, and satisfaction, as well as the perceived importance of each in contributing to a good answer.


Proceedings of the American Society for Information Science and Technology | 2012

Developing a typology of online Q&A models and recommending the right model for each question type

Erik Choi; Vanessa Kitzie; Chirag Shah

Although online Q&A services have increased in popularity, the field lacks a comprehensive typology to classify different kinds of services into model types. This poster categorizes online Q&A services into four model types – community-based, collaborative, expert-based, and social. Drawing such a distinction between online Q&A models provides an overview of how these model types differ from one another and suggests implications for mitigating the weaknesses and bolstering the strengths of each model based on the types of questions addressed within it. To demonstrate differences among these models, an appropriate service was selected for each of them. Then, 500 questions were collected and analyzed for each of these services to classify question types into four categories – information-seeking, advice-seeking, opinion-seeking, and non-information seeking. The findings suggest that information-seeking questions appear to be more suitable in either a collaborative Q&A environment or an expert-based Q&A environment, while opinion-seeking questions are more common in community-based Q&A. Social Q&A, on the other hand, provides an active forum for either seeking personal advice or seeking non-information related to either self-expression or self-promotion.
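
A minimal sketch of the kind of cross-tabulation described above, counting hand-coded question types within each Q&A model. The coded records below are illustrative placeholders, not the poster's data:

from collections import Counter

# Hypothetical hand-coded sample: (Q&A model, question type) pairs.
# Labels mirror the four models and four question types named in the
# poster; the records themselves are invented for illustration.
coded_questions = [
    ("community-based", "opinion-seeking"),
    ("community-based", "opinion-seeking"),
    ("collaborative", "information-seeking"),
    ("expert-based", "information-seeking"),
    ("social", "advice-seeking"),
    ("social", "non-information seeking"),
]

tally = Counter(coded_questions)
models = sorted({m for m, _ in coded_questions})
qtypes = sorted({q for _, q in coded_questions})

# For each model, report the share of each question type it attracted.
for model in models:
    total = sum(tally[(model, q)] for q in qtypes)
    shares = {q: round(tally[(model, q)] / total, 2)
              for q in qtypes if tally[(model, q)]}
    print(model, shares)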


Information, Communication & Society | 2014

Testing the validity of social capital measures in the study of information and communication technologies

Lora Appel; Punit Dadlani; Maria Dwyer; Keith N. Hampton; Vanessa Kitzie; Ziad Matni; Patricia Moore; Rannie Teodoro

Social capital has been considered a cause and consequence of various uses of new information and communication technologies (ICTs). However, there is a growing divergence between how social capital is commonly measured in the study of ICTs and how it is measured in other fields. This departure raises questions about the validity of some of the most widely cited studies of social capital and ICTs. We compare the Internet Social Capital Scales (ISCS) developed by Williams [2006. On and off the ’net: scales for social capital in an online era. Journal of Computer-Mediated Communication, 11(2), 593–628. doi: 10.1111/j.1083-6101.2006.00029.x] – a series of psychometric scales commonly used to measure ‘social capital’ – to established, structural measures of social capital: name, position, and resource generators. Based on a survey of 880 undergraduate students (the population to which the ISCS has been most frequently administered), we find that, unlike structural measures, the ISCS does not distinguish between the distinct constructs of bonding and bridging social capital. The ISCS does not have convergent validity with structural measures of bonding or bridging social capital; it does not measure the same concept as structural measures. The ISCS conflates social capital with the related constructs of social support and attachment. The ISCS does not measure perceived or actual social capital. These findings raise concerns about the interpretations of existing studies of ‘social capital’ and ICTs that are based on the ISCS. Given the absence of measurement validity, we urge those studying social capital to abandon the ISCS in favor of alternative approaches.
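
A minimal sketch of one way to probe convergent validity between two families of measures, using Pearson correlations over per-respondent scores. The scores below are synthetic placeholders generated for illustration; the study's actual analysis was far more extensive:

import numpy as np

rng = np.random.default_rng(0)
n = 880  # matches the survey size reported in the abstract

# Synthetic per-respondent scores (placeholders, not the study's data):
# psychometric ISCS-style scores vs. structural generator-based scores.
iscs_bonding = rng.normal(size=n)
iscs_bridging = 0.8 * iscs_bonding + rng.normal(scale=0.6, size=n)
structural_bonding = rng.normal(size=n)
structural_bridging = rng.normal(size=n)

def r(a, b):
    """Pearson correlation between two score vectors."""
    return float(np.corrcoef(a, b)[0, 1])

# Convergent validity asks whether measures of the same construct agree;
# discriminant validity asks whether bonding and bridging stay distinct.
print("ISCS bonding vs structural bonding:  ", round(r(iscs_bonding, structural_bonding), 2))
print("ISCS bridging vs structural bridging:", round(r(iscs_bridging, structural_bridging), 2))
print("ISCS bonding vs ISCS bridging:       ", round(r(iscs_bonding, iscs_bridging), 2))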


Proceedings of the American Society for Information Science and Technology | 2012

“How much change do you get from 40$?” – Analyzing and addressing failed questions on social Q&A

Chirag Shah; Marie L. Radford; Lynn Silipigni Connaway; Erik Choi; Vanessa Kitzie

Online question-answering (Q&A) services are becoming increasingly popular among information seekers. While online Q&A services encompass both virtual reference service (VRS) and social Q&A (SQA), SQA services, such as Yahoo! Answers and WikiAnswers, have experienced more success in reaching out to the masses and leveraging subsequent participation. However, the large volume of content on some of the more popular SQA sites renders participants unable to answer some posted questions adequately or even at all. To reduce this latter category of questions that do not receive an answer, the current paper explores reasons why fact-based questions fail on a specific Q&A service. For this exploration and analysis, thousands of failed questions were collected from Yahoo! Answers, extracting only those that were fact-based, information-seeking questions, while opinion/advice-seeking questions were discarded. A typology was then created, using a grounded theory approach, to code the reasons these questions failed. Using this typology, suggestions are proposed for how the questions could be restructured or redirected to another Q&A service (possibly a VRS) so that users would have a better chance of receiving an answer.


Journal of Information Science | 2014

Modalities, motivations, and materials - investigating traditional and social online Q&A services

Chirag Shah; Vanessa Kitzie; Erik Choi

With the advent of ubiquitous connectivity and a constant flux of user-generated content, people’s online information-seeking behaviours are rapidly changing, one of which is seeking information from peers through online questioning. Ways to understand this new behaviour can be broken down into three aspects, also referred to as the three M’s – the modalities (sources and strategies) that people use when asking their questions online, their motivations behind asking these questions and choosing specific services, and the types and quality of the materials (content) generated in such an online Q&A environment. This article will provide a new framework – three M’s – based on the synthesis of relevant literature. It will then identify some of the gaps in our knowledge about online Q&A based on this framework. These gaps will be transformed into six research questions, stemming from the three M’s, and addressed by (a) consolidating and synthesizing findings previously reported in the literature, (b) conducting new analyses of data used in prior work, and (c) administering a new study to answer questions unaddressed by the pre-existing and new analyses of prior work.


Hawaii International Conference on System Sciences | 2014

Questioning the Question -- Addressing the Answerability of Questions in Community Question-Answering

Chirag Shah; Vanessa Kitzie; Erik Choi

In this paper, we investigate question quality among questions posted in Yahoo! Answers to assess what factors contribute to the goodness of a question and determine if we can flag poor quality questions. Using human assessments of whether a question is good or bad and extracted textual features from the questions, we built an SVM classifier that performed with relatively good classification accuracy for both good and bad questions. We then enhanced the performance of this classifier by using additional human assessments of question type as well as additional question features to first separate questions by type and then classify them. This two-step classifier improved the performance of the original classifier in identifying Type II errors and suggests that our model presents a novel approach for identifying bad questions with implications for query revision and routing.
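
A minimal sketch of a two-step setup in the spirit described above: first separate questions by (human-assessed) type, then train a per-type text classifier. The toy data, TF-IDF features, and scikit-learn SVM below are assumptions for illustration, not the authors' actual pipeline or features:

from collections import defaultdict
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy labelled records: (question text, question type, quality label).
# The paper's real features went beyond plain question text.
data = [
    ("How do I convert miles to kilometers?", "information", "good"),
    ("kilometers???", "information", "bad"),
    ("What year did the Berlin Wall fall?", "information", "good"),
    ("wall when", "information", "bad"),
    ("Should I take this job offer or stay where I am?", "advice", "good"),
    ("job????", "advice", "bad"),
]

# Step 1: separate the corpus by (human-assessed) question type.
by_type = defaultdict(list)
for text, qtype, label in data:
    by_type[qtype].append((text, label))

# Step 2: train one TF-IDF + linear SVM classifier per question type.
models = {}
for qtype, rows in by_type.items():
    texts, labels = zip(*rows)
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    models[qtype] = clf.fit(list(texts), list(labels))

print(models["information"].predict(["How many ounces are in a pound?"]))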


Proceedings of the American Society for Information Science and Technology | 2011

Faster, Better, or Both? Looking at Both Sides of Online Question-Answering Coin

Vanessa Kitzie; Chirag Shah

Online question-answering (Q&A) services are becoming increasingly popular among information seekers. We divide them into two domains, social Q&A (SQA) and virtual reference (VR), and ask what the demands and expectations are for both in satisfying information-seeking needs. Drawing on qualitative analysis of more than 30 interviews with both experts (librarians) and end users (students), we present findings that indicate a mismatch in experts’ and end users’ understanding of how and when each service should be used. More importantly, we show how SQA and VR differ in their functionalities and offerings, commenting on their pros and cons and the ways in which one could create better hybrid solutions for online Q&A services.


Archive | 2013

A machine learning-based approach to predicting success of questions on social question-answering

Erik Choi; Vanessa Kitzie; Chirag Shah

While social question-answering (SQA) services are becoming increasingly popular, there is often an issue of unsatisfactory or missing information for a question posed by an information seeker. This study creates a model to predict question failure, or a question that does not receive an answer, within the social Q&A site Yahoo! Answers. To do so, observed shared characteristics of failed questions were translated into empirical features, both textual and non-textual in nature, and measured using machine extraction methods. A classifier was then trained using these features and tested on a data set of 400 questions—half of them successful, half not—to determine the accuracy of the classifier in identifying failed questions. The results show the substantial ability of the approach to correctly identify the likelihood of success or failure of a question, resulting in a promising tool to automatically identify ill-formed questions and/or questions that are likely to fail and make suggestions on how to revise them.
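
A minimal sketch of combining textual and non-textual features to predict whether a question receives an answer, loosely following the idea described above. The toy records, hand-rolled features, and logistic-regression classifier are illustrative assumptions, not the study's feature set or model:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy records: (question text, hour posted, number of tags, answered?).
# The study extracted many more textual and non-textual features.
samples = [
    ("What is the capital of Australia?", 14, 3, 1),
    ("help!!!!", 3, 0, 0),
    ("How do I cite a website in APA style?", 10, 2, 1),
    ("anyone???", 2, 0, 0),
    ("Why is the sky blue during the day?", 16, 1, 1),
    ("ugh nobody answers me", 4, 0, 0),
]

def features(text, hour, n_tags):
    """Hand-rolled illustrative features mixing textual and contextual cues."""
    return [
        len(text.split()),        # word count
        text.count("?"),          # question marks
        text.count("!"),          # exclamation marks
        int(text[:1].isupper()),  # starts like a sentence
        hour,                     # posting hour (non-textual)
        n_tags,                   # tag/category count (non-textual)
    ]

X = np.array([features(t, h, k) for t, h, k, _ in samples], dtype=float)
y = np.array([answered for *_, answered in samples])

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([features("How long should I boil an egg?", 12, 2)]))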


Association for Information Science and Technology | 2015

#Criming and #alive: network and content analysis of two sides of a story on Twitter

Vanessa Kitzie; Debanjan Ghosh

On December 3, 2014, after a grand jury decided not to indict the white police officer in the death of Eric Garner, the social networking platform Twitter was flooded with tweets sharing stances on racial profiling and police brutality. To examine how issues concerning race were communicated and exchanged during this time, this study compares differences between tweets using two trending hashtags #CrimingWhileWhite (#cww) and #AliveWhileBlack (#awb) from December 3 through December 11, 2014. To this end, network and content analysis are used on a large dataset of tweets containing the hashtags #awb and #cww. Findings indicate that there are clear differences, both structurally and in linguistic style, between how individuals express themselves based on which hashtag they used. Specifically, we found that #cww users disproportionately shared informational content, which may have led to the hashtag gaining more network volume and attention as a trending topic than #awb. In contrast, #awb tweets tended to be more subjective, expressing a sense of community and strong negative sentiment toward persistent structural racism.
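
A minimal sketch of the two analysis strands mentioned above, building a per-hashtag interaction network and computing a crude content signal, using networkx on a handful of invented tweet records. The fields, examples, and first-person-pronoun measure are assumptions for illustration; the study used a large real dataset and richer network and linguistic analyses:

import networkx as nx

# Invented tweet records: (author, mentioned or retweeted user, hashtag, text).
tweets = [
    ("a", "b", "#cww", "Got pulled over and let off with a warning. #CrimingWhileWhite"),
    ("c", "b", "#cww", "Shoplifted as a teen and the officer drove me home. #CrimingWhileWhite"),
    ("d", "e", "#awb", "Followed around the store again today. #AliveWhileBlack"),
    ("f", "e", "#awb", "Stopped for 'fitting a description.' I am exhausted. #AliveWhileBlack"),
]

# Network strand: one directed interaction graph per hashtag.
graphs = {"#cww": nx.DiGraph(), "#awb": nx.DiGraph()}
for author, target, tag, _ in tweets:
    graphs[tag].add_edge(author, target)

for tag, g in graphs.items():
    hub = max(dict(g.in_degree()).items(), key=lambda kv: kv[1])
    print(tag, "nodes:", g.number_of_nodes(), "edges:", g.number_of_edges(), "hub:", hub)

# Content strand: a crude subjectivity cue, the rate of first-person pronouns.
first_person = {"i", "me", "my", "we", "our"}
for tag in graphs:
    words = [w.strip(".,!?'\"").lower()
             for _, _, h, text in tweets if h == tag
             for w in text.split()]
    rate = sum(w in first_person for w in words) / max(1, len(words))
    print(tag, "first-person rate:", round(rate, 3))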


Association for Information Science and Technology | 2016


Sarah Barriage; Wayne Buente; Elke Greifeneder; Devon Greyson; Vanessa Kitzie; Miraida Morales; Ross J. Todd

When developing one's own research agenda, early and mid-career researchers continually negotiate how best to meet ethical standards and resolve ethical constraints using methodologically sound approaches. Often such struggles occur behind closed doors, their outcomes reflected in the institutional language of an ethical review board. This panel seeks to bring these struggles to the forefront by having panelists who study various populations discuss how they approach ethical challenges in their research. Due to the nature of the groups these panelists study, the panel provides a context in which ethical struggles, challenges, and tensions are exacerbated. Key issues to be discussed are: informed consent, risks to participants, and research design and dissemination. Discussion of these issues will be oriented around each participant's metatheoretical orientation to research in library and information science (LIS). Adopting such an approach will highlight some of the main challenges of engaging in ethical practices that may not align with institutional standards, as well as denote possible strategies for addressing them.
