
Publication


Featured research published by Snehasish Banerjee.


Computers in Human Behavior | 2016

Helpfulness of user-generated reviews as a function of review sentiment, product type and information quality

Alton Yeow-Kuan Chua; Snehasish Banerjee

Helpfulness of user-generated reviews has not been studied adequately in terms of the interplay between review sentiment (favorable, unfavorable and mixed) and product type (search and experience). Moreover, the ways in which information quality relates to review helpfulness remain largely unknown. Hence, this paper seeks to answer the following two research questions: (1) How does the helpfulness of user-generated reviews vary as a function of review sentiment and product type? (2) How does information quality relate to the helpfulness of user-generated reviews across review sentiment and product type? Data included 2,190 reviews drawn from Amazon for three search products (digital cameras, cell phones, and laser printers) and three experience products (books, skin care, and music albums). Review sentiment was ascertained based on star ratings. Investigation of the research questions relied on the statistical procedures of analysis of variance and multiple regression. Review helpfulness was found to vary across review sentiment independent of product type. Moreover, the relationship between information quality and review helpfulness was found to vary as a function of both review sentiment and product type. The paper concludes with a number of implications for research and practice.
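The grouping step that precedes such an analysis can be illustrated with a short sketch: computing mean review helpfulness (helpful votes over total votes) per sentiment group inferred from star ratings. The star-rating cutoffs and the tuple layout below are assumptions for illustration only, not taken from the paper.

```python
def sentiment_from_stars(stars):
    """Map a 1-5 star rating to a sentiment label (assumed cutoffs)."""
    if stars >= 4:
        return "favorable"
    if stars <= 2:
        return "unfavorable"
    return "mixed"

def mean_helpfulness_by_sentiment(reviews):
    """reviews: iterable of (stars, helpful_votes, total_votes) tuples.
    Helpfulness of a review is the fraction of votes marking it helpful."""
    sums, counts = {}, {}
    for stars, helpful, total in reviews:
        if total == 0:
            continue  # no votes cast: helpfulness is undefined, skip
        label = sentiment_from_stars(stars)
        sums[label] = sums.get(label, 0.0) + helpful / total
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

# Hypothetical sample: (stars, helpful votes, total votes) per review.
sample = [(5, 8, 10), (1, 2, 10), (3, 5, 10), (4, 9, 10)]
means = mean_helpfulness_by_sentiment(sample)
```

A real replication would feed these group means into analysis of variance, as the paper does.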


Journal of the Association for Information Science and Technology | 2013

So fast so good: An analysis of answer quality and answer speed in community question-answering sites

Alton Yeow-Kuan Chua; Snehasish Banerjee

The authors investigate the interplay between answer quality and answer speed across question types in community question-answering sites (CQAs). The research questions addressed are the following: (a) How do answer quality and answer speed vary across question types? (b) How do the relationships between answer quality and answer speed vary across question types? (c) How do the best quality answers and the fastest answers differ in terms of answer quality and answer speed across question types? (d) How do trends in answer quality vary over time across question types? From the posting of 3,000 questions in six CQAs, 5,356 answers were harvested and analyzed. There was a significant difference in answer quality and answer speed across question types, and there were generally no significant relationships between answer quality and answer speed. The best quality answers had better overall answer quality than the fastest answers but generally took longer to arrive. In addition, although the trend in answer quality had been mostly random across all question types, the quality of answers appeared to improve gradually when given time. By highlighting the subtle nuances in answer quality and answer speed across question types, this study is an attempt to explore a territory of CQA research that has hitherto been relatively uncharted.


Journal of the Association for Information Science and Technology | 2015

Understanding review helpfulness as a function of reviewer reputation, review rating, and review depth

Alton Yeow-Kuan Chua; Snehasish Banerjee

This article examines review helpfulness as a function of reviewer reputation, review rating, and review depth. Drawing data from the popular review platform Amazon, results indicate that review helpfulness is positively related to reviewer reputation and review depth but is negatively related to review rating. Users seem to have a proclivity for reviews contributed by reviewers with a positive track record. They also appreciate reviews with lambasting comments and those with adequate depth. By highlighting its implications for theory and practice, the article concludes with limitations and areas for further research.


science and information conference | 2014

Applauses in hotel reviews: Genuine or deceptive?

Snehasish Banerjee; Alton Yeow-Kuan Chua

With the profusion of social media, users increasingly browse through hotel reviews posted on websites such as TripAdvisor.com or Expedia.com to make a booking. Concurrently, contributing deceptive reviews to unduly applaud hotels is fast becoming a well-established e-business malpractice. Therefore, analyzing differences between genuine and deceptive reviews has become a pressing issue. Though such differences are generally difficult to detect, there could be telltale signs in terms of readability, genre, and writing style of reviews. This paper thus conducts a linguistic analysis to investigate the extent to which readability, genre, and writing style could predict review authenticity. Drawing data from a publicly available secondary dataset that includes 400 genuine and 400 deceptive reviews for hotels, results indicate that readability and writing style could be significant predictors of review authenticity. With respect to review genre, however, differences between genuine and deceptive reviews appeared largely blurred. The implications of the findings are discussed. Finally, the paper concludes with notes on limitations and future research directions.
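Readability, one of the predictors named above, is commonly operationalized with formulas such as Flesch Reading Ease. The sketch below is a minimal, heuristic implementation; the paper does not specify which readability metric or syllable counter it used, so both are assumptions here.

```python
import re

def count_syllables(word):
    """Crude vowel-group heuristic; adequate only for illustration."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1  # discount a likely silent final 'e'
    return max(n, 1)

def flesch_reading_ease(text):
    """Flesch Reading Ease:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Higher scores indicate easier text, so a plainly worded review scores above a verbose, polysyllabic one; scores like these can then serve as one feature in an authenticity model.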


Journal of Information Science | 2015

Answers or no answers

Alton Yeow-Kuan Chua; Snehasish Banerjee

Some questions posted in community question answering sites (CQAs) fail to attract a single answer. To address the growing volumes of unanswered questions in CQAs, the objective of this paper is two-fold. First, it aims to develop a conceptual framework known as the Quest-for-Answer to explain why some questions in CQAs draw answers while others remain ignored. The framework suggests that the answerability of questions depends on both metadata and content. Second, the paper attempts to empirically validate the Quest-for-Answer framework through a case study of Stack Overflow. A total of 3,000 questions, divided equally between those answered and unanswered, were used for analysis. The Quest-for-Answer framework yielded generally promising results. With respect to metadata, asker’s popularity, participation and asking time of questions were found to be significant in predicting if answers would be forthcoming. With respect to content, level of details, specificity, clarity and the socio-emotional value of questions were significant in enhancing or impeding responses.


Online Information Review | 2014

A theoretical framework to identify authentic online reviews

Snehasish Banerjee; Alton Yeow-Kuan Chua

Purpose – The purpose of this paper is to investigate the extent to which textual characteristics of online reviews help identify authentic entries from manipulative ones across positive and negative comments. Design/methodology/approach – A theoretical framework is proposed to identify authentic online reviews from manipulative ones based on three textual characteristics, namely, comprehensibility, informativeness, and writing style. The framework is tested using two publicly available data sets, one comprising positive reviews to hype own offerings, and the other including negative reviews to slander competing offerings. Logistic regression is used for analysis. Findings – The three textual characteristics offered useful insights to identify authentic online reviews from manipulative ones. In particular, the differences between authentic and manipulative reviews in terms of comprehensibility and informativeness were more conspicuous for negative entries. On the other hand, the differences between authen...


international conference on ubiquitous information management and communication | 2015

Using supervised learning to classify authentic and fake online reviews

Snehasish Banerjee; Alton Yeow-Kuan Chua; Jung-Jae Kim

Before making a purchase, users are increasingly inclined to browse online reviews that are posted to share post-purchase experiences of products and services. However, not all reviews are necessarily authentic. Some entries could be fake yet written to appear authentic. Conceivably, authentic and fake reviews are not easy to differentiate. Hence, this paper uses supervised learning algorithms to analyze the extent to which authentic and fake reviews could be distinguished based on four linguistic clues, namely, understandability, level of details, writing style, and cognition indicators. The model performance was compared with two baselines. The results were generally promising.
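The paper does not disclose which supervised learning algorithms it compared, so the sketch below uses a plain perceptron as a generic stand-in: linguistic cues become numeric features, and a linear classifier is fitted to labeled examples. The feature definitions and the toy data are invented for illustration.

```python
def train_perceptron(X, y, epochs=1000):
    """Perceptron learning rule: a minimal stand-in for the unspecified
    supervised learners in the paper. Converges when the labeled feature
    vectors are linearly separable."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            target = 1 if yi == 1 else -1
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if (1 if z >= 0 else -1) != target:
                w = [wj + target * xj for wj, xj in zip(w, xi)]
                b += target
                mistakes += 1
        if mistakes == 0:
            break  # every training point classified correctly
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z >= 0 else 0

# Hypothetical features per review:
# [superlatives per 100 words, first-person pronouns per 100 words]
X = [[2, 1], [3, 2], [10, 9], [12, 8]]
y = [0, 0, 1, 1]  # 0 = authentic, 1 = fake (labels illustrative only)
w, b = train_perceptron(X, y)
```

A study like the one above would additionally hold out labeled reviews for evaluation and compare the learner against baselines, rather than scoring only the training set.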


Online Information Review | 2015

Measuring the effectiveness of answers in Yahoo! Answers

Alton Yeow-Kuan Chua; Snehasish Banerjee

Purpose – The purpose of this paper is to investigate the ways in which effectiveness of answers in Yahoo! Answers, one of the largest community question answering sites (CQAs), is related to question types and answerer reputation. Effective answers are defined as those that are detailed, readable, superior in quality and contributed promptly. Five question types that were studied include factoid, list, definition, complex interactive and opinion. Answerer reputation refers to the past track record of answerers in the community. Design/methodology/approach – The data set comprises 1,459 answers posted in Yahoo! Answers in response to 464 questions that were distributed across the five question types. The analysis was done using factorial analysis of variance. Findings – The results indicate that factoid, definition and opinion questions are comparable in attracting high quality as well as readable answers. Although reputed answerers generally fared better in offering detailed and high-quality answers, nov...


international conference on digital information management | 2014

Understanding the process of writing fake online reviews

Snehasish Banerjee; Alton Yeow-Kuan Chua

Although the prevalence of fake online reviews for products and services is deemed to have become an epidemic, little is known about the strategies used to write such bogus entries. Hence, this paper conducts an exploratory study to understand the process by which fake reviews are written. Participants were invited to write fake reviews for hotels in a research setting. Thereafter, they were asked to answer a questionnaire which solicited qualitative responses about their strategies. Results indicate that the process of writing fake reviews commences with extensive information gathering via common review websites such as TripAdvisor as well as search engines such as Google. The gathered information is then used as cues to write short, catchy and succinct fake review titles, as well as informative and subjective fake review descriptions. Adequate efforts are generally invested to blur the lines between fake reviews and authentic entries.


association for information science and technology | 2017

Don't be deceived: Using linguistic analysis to learn how to discern online review authenticity

Snehasish Banerjee; Alton Yeow-Kuan Chua; Jung-Jae Kim

This article uses linguistic analysis to help users discern the authenticity of online reviews. Two related studies were conducted using hotel reviews as the test case for investigation. The first study analyzed 1,800 authentic and fictitious reviews based on the linguistic cues of comprehensibility, specificity, exaggeration, and negligence. The analysis involved classification algorithms followed by feature selection and statistical tests. A filtered set of variables that helped discern review authenticity was identified. The second study incorporated these variables to develop a guideline that aimed to inform humans how to distinguish between authentic and fictitious reviews. The guideline was used as an intervention in an experimental setup that involved 240 participants. The intervention improved human ability to identify fictitious reviews amid authentic ones.

Collaboration


Dive into Snehasish Banerjee's collaborations.

Top Co-Authors

Alton Yeow-Kuan Chua (Nanyang Technological University)
Jung-Jae Kim (Nanyang Technological University)
Anjan Pal (Nanyang Technological University)
Ang Han Guan (Nanyang Technological University)
Liew Jun Xian (Nanyang Technological University)
Loo Geok Pee (Nanyang Technological University)
Peng Peng (Nanyang Technological University)