Niall J. Conroy
University of Western Ontario
Publications
Featured research published by Niall J. Conroy.
Association for Information Science and Technology | 2015
Niall J. Conroy; Victoria L. Rubin; Yimin Chen
This research surveys the current state‐of‐the‐art technologies that are instrumental in the adoption and development of fake news detection. “Fake news detection” is defined as the task of categorizing news along a continuum of veracity, with an associated measure of certainty. Veracity is compromised by the occurrence of intentional deceptions. The nature of online news publication has changed such that traditional fact checking and vetting against potential deception is no longer feasible given the flood of content from a multitude of generators, formats, and genres.
Association for Information Science and Technology | 2015
Victoria L. Rubin; Yimin Chen; Niall J. Conroy
A fake news detection system aims to assist users in detecting and filtering out varieties of potentially deceptive news. The prediction of the chances that a particular news item is intentionally deceptive is based on the analysis of previously seen truthful and deceptive news. A scarcity of deceptive news, available as corpora for predictive modeling, is a major stumbling block in this field of natural language processing (NLP) and deception detection. This paper discusses three types of fake news, each in contrast to genuine serious reporting, and weighs their pros and cons as a corpus for text analytics and predictive modeling. Filtering, vetting, and verifying online information continues to be essential in library and information science (LIS), as the lines between traditional news and online information are blurring.
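The supervised setup the abstract describes — predicting the chances that a news item is deceptive from previously seen truthful and deceptive examples, with an associated measure of certainty — can be sketched as a minimal Naive Bayes text classifier. The toy corpus, labels, and test headline below are invented for illustration only; they are not the paper's data or method:

```python
from collections import Counter
import math

# Toy corpus standing in for "previously seen truthful and deceptive news".
# All examples and labels here are invented for illustration only.
TRAIN = [
    ("officials confirm budget figures in annual report", "truthful"),
    ("study results published after peer review", "truthful"),
    ("miracle cure doctors dont want you to know", "deceptive"),
    ("shocking secret hidden from the public revealed", "deceptive"),
]

def train(examples):
    """Count word frequencies per class for a multinomial Naive Bayes model."""
    word_counts = {"truthful": Counter(), "deceptive": Counter()}
    class_counts = Counter()
    for text, label in examples:
        class_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, class_counts

def predict(text, word_counts, class_counts):
    """Return (label, posterior) -- the posterior serves as a certainty measure."""
    vocab = set(w for counts in word_counts.values() for w in counts)
    total_docs = sum(class_counts.values())
    scores = {}
    for label in class_counts:
        log_p = math.log(class_counts[label] / total_docs)
        n_words = sum(word_counts[label].values())
        for w in text.split():
            # Laplace smoothing so unseen words do not zero out the score.
            log_p += math.log((word_counts[label][w] + 1) / (n_words + len(vocab)))
        scores[label] = log_p
    best = max(scores, key=scores.get)
    normalizer = sum(math.exp(s) for s in scores.values())
    return best, math.exp(scores[best]) / normalizer

word_counts, class_counts = train(TRAIN)
label, certainty = predict("shocking secret cure revealed", word_counts, class_counts)
```

In a realistic system the bag-of-words features would be replaced by the richer linguistic cues the paper discusses, and the corpus scarcity problem it raises is precisely why the toy training set above is unconvincing at scale.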
Proceedings of the 2015 ACM on Workshop on Multimodal Deception Detection | 2015
Yimin Chen; Niall J. Conroy; Victoria L. Rubin
Tabloid journalism is often criticized for its propensity for exaggeration, sensationalization, scare-mongering, and otherwise producing misleading and low quality news. As the news has moved online, a new form of tabloidization has emerged: “clickbaiting.” “Clickbait” refers to “content whose main purpose is to attract attention and encourage visitors to click on a link to a particular web page” [“clickbait,” n.d.] and has been implicated in the rapid spread of rumor and misinformation online. This paper examines potential methods for the automatic detection of clickbait as a form of deception. Methods for recognizing both textual and non-textual clickbaiting cues are surveyed, leading to the suggestion that a hybrid approach may yield best results.
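The textual side of such detection can be sketched as a rule-based cue matcher. The cue patterns and threshold below are hypothetical examples, not the cue inventory surveyed in the paper; a hybrid system in the paper's sense would combine textual signals like these with non-textual ones such as image and link features:

```python
import re

# Hypothetical textual clickbait cues, for illustration only.
CUE_PATTERNS = [
    (r"\byou won'?t believe\b", "forward-reference"),
    (r"^\d+\s", "listicle number"),
    (r"\b(shocking|amazing|unbelievable)\b", "sensational adjective"),
    (r"\?$", "question headline"),
    (r"\bthis (one )?(trick|secret|reason)\b", "vague deixis"),
]

def clickbait_cues(headline):
    """Return the names of clickbait cues matched in a headline."""
    h = headline.lower().strip()
    return [name for pattern, name in CUE_PATTERNS if re.search(pattern, h)]

def is_clickbait(headline, threshold=2):
    """Flag a headline when it triggers at least `threshold` cues."""
    return len(clickbait_cues(headline)) >= threshold
```

For example, `is_clickbait("7 shocking secrets you won't believe")` fires the listicle, sensational-adjective, and forward-reference cues, while a plain headline like "Parliament passes budget bill" triggers none.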
Association for Information Science and Technology | 2015
Yimin Chen; Niall J. Conroy; Victoria L. Rubin
Widespread adoption of internet technologies has changed the way that news is created and consumed. The current online news environment is one that incentivizes speed and spectacle in reporting, at the cost of fact‐checking and verification. The line between user generated content and traditional news has also become increasingly blurred. This poster reviews some of the professional and cultural issues surrounding online news and argues for a two‐pronged approach inspired by Hemingway's “automatic crap detector” (Manning, 1965) in order to address these problems: a) proactive public engagement by educators, librarians, and information specialists to promote digital literacy practices; b) the development of automated tools and technologies to assist journalists in vetting, verifying, and fact‐checking, and to assist news readers by filtering and flagging dubious information.
Proceedings of the American Society for Information Science and Technology | 2012
Victoria L. Rubin; Niall J. Conroy
One of the novel research directions in Natural Language Processing and Machine Learning involves creating and developing methods for automatic discernment of deceptive messages from truthful ones. Mistaking intentionally deceptive pieces of information for authentic ones (true to the writer’s beliefs) can create negative consequences, since our everyday decision-making, actions, and mood are often impacted by information we encounter. Such research is vital today as it aims to develop tools for the automated recognition of deceptive, disingenuous or fake information (the kind intended to create false beliefs or conclusions in the reader’s mind). The ultimate goal is to support truthfulness ratings that signal the trustworthiness of the retrieved information, or alert information seekers to potential deception. To proceed with this agenda, we require elicitation techniques for obtaining samples of both deceptive and truthful messages from study participants in various subject areas. A data collection, or a corpus of truths and lies, should meet certain basic criteria to allow for meaningful analysis and comparison of socio-linguistic behaviors. In this paper we propose solutions and weigh pros and cons of various experimental set-ups in the art of corpus building. The outcomes of three experiments demonstrate certain limitations with using online crowdsourcing for data collection of this type. Incorporating motivation in the task descriptions, and the role of visual context in creating deceptive narratives are other factors that should be addressed in future efforts to build a quality dataset.
Journal of the Association for Information Science and Technology | 2017
Lu Xiao; Niall J. Conroy
Offering one's perspective and justifying it has become a common practice in online text‐based communication, just as it is in typical face‐to‐face communication. Compared to face‐to‐face communication, it can be particularly challenging for users to understand and evaluate another's perspective online. On the other hand, the availability of a communication record in online settings offers the potential to leverage computational techniques to automatically detect user opinions and rationales. One promising approach to automatically detecting rationales is to detect the common discourse relations in rationale texts. However, no empirical work has been done with regard to which discourse relations are commonly present in users' rationales in online communications. To fill this gap, we annotated the discourse relations in the text segments that contain the rationales (N = 527 text segments). These text segments were obtained from five datasets, each consisting of an online post and its first 100 comments. We identified 10 discourse relations that are commonly present in this sample. Our finding marks an important contribution to this rationale detection approach. We encourage more empirical work, preferably with a larger sample, to examine the generalizability of our findings.
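A minimal sketch of the rationale-detection idea described above: spotting explicit discourse connectives that signal relations. The connective-to-relation mapping below is a small hand-made illustration loosely in the spirit of PDTB-style labels, not the paper's actual relation inventory or annotation scheme:

```python
# Illustrative mapping from explicit connectives to discourse relations.
# This is an invented sample, not the 10 relations identified in the paper.
CONNECTIVES = {
    "because": "Cause",
    "since": "Cause",
    "therefore": "Result",
    "however": "Contrast",
    "but": "Contrast",
    "for example": "Instantiation",
    "if": "Condition",
}

def spot_relations(text):
    """Return (connective, relation) pairs signalled by explicit connectives.

    Naive whole-word matching: connectives adjacent to punctuation
    (e.g. "However,") are missed, and implicit relations -- those with no
    overt connective -- are out of scope entirely.
    """
    lowered = " " + text.lower() + " "
    found = []
    for connective, relation in CONNECTIVES.items():
        if " " + connective + " " in lowered:
            found.append((connective, relation))
    return found
```

Running it on a comment like "I disagree because the data is old, but the idea is sound" surfaces a Cause and a Contrast relation; a full detector would also need to handle implicit relations, which is part of what makes the annotation study above valuable.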
Proceedings of the Second Workshop on Computational Approaches to Deception Detection | 2016
Victoria L. Rubin; Niall J. Conroy; Yimin Chen; Sarah Cornwell
First Monday | 2012
Victoria L. Rubin; Niall J. Conroy
Archive | 2015
Victoria L. Rubin; Niall J. Conroy; Yimin Chen
Proceedings of the American Society for Information Science and Technology | 2011
Victoria L. Rubin; Niall J. Conroy