
Publication


Featured research published by Leonard Hoon.


Australasian Computer-Human Interaction Conference | 2012

A preliminary analysis of mobile app user reviews

Rajesh Vasa; Leonard Hoon; Kon Mouzakis; Akihiro Noguchi

The advent of online software distribution channels like Apple Inc.'s App Store and Google Inc.'s Google Play has offered developers a single, low-cost, and powerful distribution mechanism. These online stores help users discover apps as well as leave reviews. Ratings and reviews add value to both the developer and potential new users by providing a crowd-sourced indicator of app quality. Hence, it is important for developers to get positive reviews and high ratings to ensure that an app has a viable future. But what exactly do users say on these app stores? And, more importantly, what is the experience that compels a user to leave either a positive or a negative rating? Our analysis of 8.7 million reviews from 17,330 apps shows that users tend to leave short yet informative reviews, and that both the rating and the category influence the length of a review. In this preliminary study, we found that users leave longer messages when they rate an app poorly, and that the depth of feedback in certain categories is significantly higher than in others.


Australasian Computer-Human Interaction Conference | 2012

A preliminary analysis of vocabulary in mobile app user reviews

Leonard Hoon; Rajesh Vasa; Jean-Guy Schneider; Kon Mouzakis

Online software distribution channels such as Apple Inc.'s App Store and Google Inc.'s Google Play provide a platform for third-party app distribution. These online stores feature a public review system, allowing users to express opinions regarding purchased apps. These reviews can influence product-purchasing decisions via polarised sentiment (1 to 5 stars) and user-expressed opinion. For developers, reviews are a user-facing, crowd-sourced indicator of app quality. Hence, high ratings and positive reviews affect an app's commercial viability. However, it is less clear what information is contained within these reviews and, more importantly, whether an analysis of these reviews can inform developers of design priorities as opposed to just influencing purchasing decisions. We analysed 8.7 million reviews from 17,330 apps on the App Store and found that the most frequently used words in user reviews lean toward expressions of sentiment, despite employing only approximately 37% of the words in the English dictionary. Furthermore, the range of words used to express negative opinions is significantly larger than when positive sentiments are expressed.


Australasian Computer-Human Interaction Conference | 2013

Awesome!: conveying satisfaction on the app store

Leonard Hoon; Rajesh Vasa; Gloria Yoanita Martino; Jean-Guy Schneider; Kon Mouzakis

In a competitive market like the App Store, high user-perceived quality is paramount, especially due to the public review system offered. These reviews give developers feedback on their own apps, as well as provide data for competitor analysis. However, given the size of the data set, manual analysis of reviews is unrealistic, especially given the need for a rapid response to changing market dynamics. Current research into mobile app reviews has provided insight into the statistical distributions, but there is minimal knowledge about the content of these reviews. In particular, we do not know if the aggregated numerical rating is a reliable indicator of the information within a review. This work reports on an analysis of reviews to determine how closely aligned the numerical ratings are to the textual description. We observed that short user reviews mostly contain a small set of words, and the corresponding numerical rating matches the underlying sentiment.


International Workshop on Principles of Software Evolution | 2010

Do metrics help to identify refactoring?

Jean-Guy Schneider; Rajesh Vasa; Leonard Hoon

Many iterative software development methodologies, such as eXtreme Programming, state that refactoring is one of the key activities to be undertaken in order to keep the code-base of a project well-structured and consistent. In such a context, poorly structured code may become a significant obstacle to adding new functionality or enhancing existing functionality. However, there is some anecdotal evidence that in many software projects the underlying code-base is not necessarily refactored post-release, often due to time constraints or the misconception that refactoring does not add any apparent value. In order to gain further insights into this problem area, we propose to investigate the usage frequency of refactorings in the context of open-source, object-oriented software systems. In this work, we outline our approach to detecting refactorings and present results obtained from an initial pilot study.


Trends and Applications in Software Engineering / Jezreel Mejia, Mirna Munoz, Alvaro Rocha and Jose Calvo-Manzano (eds.) | 2016

App Reviews: Breaking the User and Developer Language Barrier

Leonard Hoon; Miguel Ángel Rodríguez-García; Rajesh Vasa; Rafael Valencia-García; Jean-Guy Schneider

Apple, Google and third-party developers offer apps across over twenty categories for various smart mobile devices. Offered exclusively through the App Store and Google Play, each app allows users to review the app and their experience with it. Current literature offers a general statistical picture of these reviews and a broad overview of the nature of discontent with apps. However, we do not yet have a good framework to classify user reviews against known software quality attributes like performance or usability. In order to close this gap, in this paper we develop an ontology encompassing software attributes derived from software quality models. This decomposes into approximately five thousand words that users employ to review apps. By identifying a consistent set of vocabulary with which users communicate, we can sanitise large datasets to extract stakeholder-actionable information from reviews. The findings offered in this paper assist future app review analysis by bridging end-user communication and software engineering vocabulary.


Australasian Computer-Human Interaction Conference | 2016

Does textual word-of-mouth affect look and feel?

Milica Stojmenovic; John C. Grundy; Vivienne Farrell; Robert Biddle; Leonard Hoon

In the field of HCI, website usability and visual appeal have been studied extensively. Participant experience with a website genre influences the use and perception of the website. Word-of-Mouth (WOM), such as user reviews, influences users in hotel, restaurant, movie, and many other e-commerce domains. Thus, a company's or product's reputation can alter a consumer's behaviour towards that product. Our work aimed to acquire an understanding of the effect of textual WOM on usability and visual appeal, a novel approach to the topic. This research was undertaken using an unfamiliar city council website to exclude the influence of one's own past experiences and to allow for greater control of the textual WOM. We found that visual appeal as well as objective and subjective usability were all influenced by text that established reputations.


Australasian Computer-Human Interaction Conference | 2016

Agree to disagree: on labelling helpful app reviews

Andrew Simmons; Leonard Hoon

Mobile app designers seek to prioritise and refine app features so as to optimise user experience across the ensemble of possible situations and contexts in which the app is used. App reviews---some helpful, others irrelevant---can be analysed for feedback on this user experience. However, few studies have specifically examined the helpfulness of app reviews. In this paper, we surveyed users and developers to rate 167 reviews for helpfulness, obtaining a total of 2,558 helpfulness ratings captured on a 5-point Likert scale. We found only slight agreement (nominal Krippendorff's alpha = 0.039) between participants on the helpfulness of reviews. Differences between reviews become evident when we summarise all the helpfulness ratings per review. We conclude that the disagreement among users limits the potential of mobile app review recommender systems.
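The slight-agreement figure cited in this abstract is the nominal form of Krippendorff's alpha, an inter-rater reliability statistic that tolerates missing ratings. As a rough, generic sketch of that statistic (not the authors' code), it can be computed from a coincidence matrix like this:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Nominal Krippendorff's alpha.

    `units` is a list of units (e.g. reviews), each given as the list of
    ratings that unit received. Units with fewer than two ratings are
    skipped, as they contribute no pairable values.
    """
    o = Counter()  # coincidence matrix: o[(c, k)] for value pairs within a unit
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue
        for c, k in permutations(ratings, 2):
            o[(c, k)] += 1.0 / (m - 1)

    n = sum(o.values())           # total number of pairable values
    n_c = Counter()               # marginal totals per category
    for (c, _k), v in o.items():
        n_c[c] += v

    # Observed disagreement: mass off the diagonal of the coincidence matrix.
    d_o = sum(v for (c, k), v in o.items() if c != k)
    # Expected disagreement under chance pairing of the marginals.
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)

    if d_e == 0:
        return 1.0  # only one category observed; alpha is strictly undefined here
    return 1.0 - d_o / d_e
```

With perfect agreement the function returns 1.0, while systematic disagreement drives it below zero; values near 0 (such as the paper's 0.039) indicate agreement barely better than chance.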


Australasian Computer-Human Interaction Conference | 2016

Spreading word: author frequency of app user reviews

Leonard Hoon; Milica Stojmenovic; Rajesh Vasa; Graham Farrell

App stores allow developers to publish new updates directly to users. Users evaluate apps and leave public reviews of their opinions and experiences for others to see. App ratings and reviews are a purchase determinant for users, and serve as free user-based usability tests. Existing literature offers approaches to extract information from or to summarise app reviews, but what can we say about the authors themselves? We analysed about 8.7 million iOS app reviews written by over 5.5 million unique authors. We found that 71.5% of authors only wrote one review. Only 13,224 instances of authors re-reviewing were observed, by 12,667 authors for 3,345 apps.


Archive | 2013

An Analysis of the Mobile App Review Landscape: Trends and Implications

Leonard Hoon; Rajesh Vasa; Jean-Guy Schneider; John C. Grundy


Archive | 2013

Socrates mobile app review dataset

Kon Mouzakis; Leonard Hoon; Rajesh Vasa

Collaboration


Dive into Leonard Hoon's collaborations.

Top Co-Authors

Rajesh Vasa, Swinburne University of Technology
Kon Mouzakis, Swinburne University of Technology
Jean-Guy Schneider, Swinburne University of Technology
Felix Ter Chian Tan, University of New South Wales
Akihiro Noguchi, Swinburne University of Technology