Network


Latest external collaborations at the country level. Click on a dot for details.

Hotspot


Dive into the research topics where Andrea Di Sorbo is active.

Publication


Featured research published by Andrea Di Sorbo.


International Conference on Software Maintenance | 2015

How can I improve my app? Classifying user reviews for software maintenance and evolution

Sebastiano Panichella; Andrea Di Sorbo; Emitza Guzman; Corrado Aaron Visaggio; Gerardo Canfora; Harald C. Gall

App stores, such as Google Play or the Apple Store, allow users to provide feedback on apps by posting review comments and giving star ratings. These platforms constitute a useful electronic means by which application developers and users can productively exchange information about apps. Previous research showed that user feedback contains usage scenarios, bug reports and feature requests that can help app developers accomplish software maintenance and evolution tasks. However, in the case of the most popular apps, the large amount of feedback received, its unstructured nature and its varying quality can make the identification of useful user feedback a very challenging task. In this paper we present a taxonomy for classifying app reviews into categories relevant to software maintenance and evolution, as well as an approach that merges three techniques: (1) Natural Language Processing, (2) Text Analysis and (3) Sentiment Analysis to automatically classify app reviews into the proposed categories. We show that the combined use of these techniques achieves better results (a precision of 75% and a recall of 74%) than each technique used individually (a precision of 70% and a recall of 67%).
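The combination of techniques described above can be illustrated with a minimal sketch. The category labels, regular-expression patterns, and tiny sentiment lexicon below are invented for illustration only; the paper's actual approach relies on far richer NLP, text-analysis, and sentiment-analysis machinery:

```python
import re

# Hypothetical category patterns (the "text analysis" signal); the paper's
# taxonomy and linguistic patterns are much richer than this sketch.
PATTERNS = {
    "FEATURE REQUEST": re.compile(
        r"\b(?:add|wish|would be (?:great|nice)|please support)\b", re.I),
    "PROBLEM DISCOVERY": re.compile(
        r"\b(?:crash(?:es|ed)?|bug|freez(?:es|ing)|doesn'?t work)\b", re.I),
}

# Tiny illustrative negative-word lexicon (the "sentiment analysis" signal).
NEGATIVE = {"terrible", "awful", "annoying", "broken", "hate"}

def classify_review(text: str) -> str:
    """Combine pattern matching with a crude sentiment signal."""
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            return label
    # No explicit pattern matched: fall back on tone.
    words = set(re.findall(r"[a-z']+", text.lower()))
    if words & NEGATIVE:
        return "PROBLEM DISCOVERY"  # negative tone often signals a complaint
    return "INFORMATION GIVING"

print(classify_review("Please support dark mode, it would be great"))
print(classify_review("The app crashes every time I open it"))
```

The point of the sketch is the fusion step: a review that matches no linguistic pattern can still be routed to a maintenance-relevant category by its sentiment alone.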


Foundations of Software Engineering | 2016

What would users change in my app? Summarizing app reviews for recommending software changes

Andrea Di Sorbo; Sebastiano Panichella; Carol V. Alexandru; Junji Shimagaki; Corrado Aaron Visaggio; Gerardo Canfora; Harald C. Gall

Mobile app developers constantly monitor feedback in user reviews with the goal of improving their apps and better meeting user expectations. Automated approaches have therefore been proposed in the literature to reduce the effort required to analyze the feedback contained in user reviews, via automatic classification/prioritization according to specific topics. In this paper, we introduce SURF (Summarizer of User Reviews Feedback), a novel approach to condense the enormous amount of information that developers of popular apps have to manage due to the user feedback they receive on a daily basis. SURF relies on a conceptual model for capturing user needs that is useful for developers performing maintenance and evolution tasks, and then applies sophisticated summarization techniques to condense thousands of reviews into an interactive, structured and condensed agenda of recommended software changes. We performed an end-to-end evaluation of SURF on user reviews of 17 mobile apps (5 of them developed by Sony Mobile), involving 23 developers and researchers in total. The results demonstrate the high accuracy of SURF in summarizing reviews and the usefulness of the recommended changes. In evaluating our approach we found that SURF helps developers better understand user needs, substantially reducing the time required compared to manually analyzing user (change) requests and planning future software changes.


Automated Software Engineering | 2015

Development Emails Content Analyzer: Intention Mining in Developer Discussions (T)

Andrea Di Sorbo; Sebastiano Panichella; Corrado Aaron Visaggio; Massimiliano Di Penta; Gerardo Canfora; Harald C. Gall

Written development communication (e.g. mailing lists, issue trackers) constitutes a precious source of information for building recommenders for software engineers, for example aimed at suggesting experts or at re-documenting existing source code. In this paper we propose a novel, semi-supervised approach named DECA (Development Emails Content Analyzer) that uses Natural Language Parsing to classify the content of development emails according to their purpose (e.g. feature request, opinion asking, problem discovery, solution proposal, information giving, etc.), identifying email elements that can be used for specific tasks. A study based on data from Qt and Ubuntu highlights the high precision (90%) and recall (70%) of DECA in classifying email content, outperforming traditional machine learning strategies. Moreover, we successfully used DECA to re-document the source code of Eclipse and Lucene, improving the recall, while keeping the high precision, of a previous approach based on ad-hoc heuristics.


Foundations of Software Engineering | 2016

ARdoc: app reviews development oriented classifier

Sebastiano Panichella; Andrea Di Sorbo; Emitza Guzman; Corrado Aaron Visaggio; Gerardo Canfora; Harald C. Gall

Google Play, the Apple App Store and the Windows Phone Store are well-known distribution platforms where users can download mobile apps, rate them and write review comments about the apps they are using. Previous research demonstrated that these reviews contain important information that can help developers improve their apps. However, analyzing reviews is challenging due to the large number of reviews posted every day, their unstructured nature and their varying quality. In this demo we present ARdoc, a tool which combines three techniques: (1) Natural Language Parsing, (2) Text Analysis and (3) Sentiment Analysis to automatically classify the feedback contained in app reviews that is useful for performing software maintenance and evolution tasks. Our quantitative and qualitative analysis (involving professional mobile developers) demonstrates that ARdoc correctly classifies maintenance-relevant feedback in user reviews with high precision (ranging between 84% and 89%), recall (ranging between 84% and 89%), and F-measure (ranging between 84% and 89%). While evaluating our tool, the developers in our study confirmed the usefulness of ARdoc in extracting important maintenance tasks for their mobile applications. Demo URL: https://youtu.be/Baf18V6sN8E Demo Web Page: http://www.ifi.uzh.ch/seal/people/panichella/tools/ARdoc.html


2015 Mobile Systems Technologies Workshop (MST) | 2015

Obfuscation Techniques against Signature-Based Detection: A Case Study

Gerardo Canfora; Andrea Di Sorbo; Francesco Mercaldo; Corrado Aaron Visaggio

Android malware is growing increasingly complex. In order to evade signature-based detection, the technique most widely adopted by current antimalware vendors, malware writers have begun to deploy malware with the ability to change its code as it propagates. In this paper, our aim is to evaluate the robustness of Android antimalware tools when various evasion techniques are used to obfuscate malicious payloads. To support this assessment we built a tool that applies a number of common transformations to the code of malware applications, and applied these transformations to about 5000 malware apps. Our results demonstrate that, after the code transformations, the malware is no longer detected by a large set of antimalware tools, even when the malware was correctly identified by most of those tools before the transformations were applied. These outcomes suggest that malware detection methods must be quickly redesigned to successfully protect smart devices.
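To give a flavor of the kind of transformation such a tool applies, the toy function below base64-encodes string literals so that signature strings no longer appear verbatim in the code. It is a hypothetical, source-level analogue only: the actual tool operates on Android (Dalvik) code and applies several further obfuscations beyond string encryption:

```python
import base64
import re

def obfuscate_strings(source: str) -> str:
    """Replace double-quoted string literals with expressions that decode
    them at run time, hiding them from plain string-matching signatures."""
    def encode(match):
        encoded = base64.b64encode(match.group(1).encode()).decode()
        return f'__import__("base64").b64decode("{encoded}").decode()'
    return re.sub(r'"([^"\\]*)"', encode, source)

snippet = 'send_sms("PREMIUM", "activate")'
obfuscated = obfuscate_strings(snippet)
print(obfuscated)
# A signature matching the literal "PREMIUM" no longer fires:
assert "PREMIUM" not in obfuscated
```

The program's behavior is unchanged (the strings are reconstructed at run time), which is exactly why purely signature-based detectors struggle against such transformations.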


International Conference on Software Engineering | 2016

DECA: development emails content analyzer

Andrea Di Sorbo; Sebastiano Panichella; Corrado Aaron Visaggio; Massimiliano Di Penta; Gerardo Canfora; Harald C. Gall

Written development discussions occurring over different communication means (e.g. issue trackers, development mailing lists, or IRC chats) represent a precious source of information for developers, as well as for researchers interested in building recommender systems. Such discussions contain text with different purposes, e.g. discussing feature requests, bugs to fix, etc. In this context, manually classifying or filtering such discussions according to their purpose would be a daunting and time-consuming task. In this demo we present DECA (Development Emails Content Analyzer), a tool which uses Natural Language Parsing to classify the content of development emails according to their purpose (e.g. feature request, solution proposal, information giving, etc.), identifying email fragments that can be used for specific maintenance tasks. We applied DECA to the discussions occurring on the development mailing lists of the Qt and Ubuntu projects. The results highlight the high precision (90%) and recall (70%) of DECA in classifying email content, providing useful information to developers interested in accomplishing specific development tasks. Demo URL: https://youtu.be/FmwBuBaW6Sk Demo Web Page: http://www.ifi.uzh.ch/seal/people/panichella/tools/DECA.html


International Conference on Software Engineering | 2017

SURF: summarizer of user reviews feedback

Andrea Di Sorbo; Sebastiano Panichella; Carol V. Alexandru; Corrado Aaron Visaggio; Gerardo Canfora

Continuous Delivery (CD) enables mobile developers to release small, high-quality chunks of working software rapidly. However, faster delivery and higher software quality guarantee neither user satisfaction nor positive business outcomes. Previous work demonstrates that app reviews may contain crucial information that can guide developers' software maintenance efforts toward higher customer satisfaction. However, previous work also highlights the difficulties developers encounter in manually analyzing this rich source of data, namely (i) the huge number of reviews an app may receive on a daily basis and (ii) the unstructured nature of their content. In this paper, we propose SURF (Summarizer of User Reviews Feedback), a tool able to (i) analyze and classify the information contained in app reviews and (ii) distill actionable change tasks for improving mobile applications. Specifically, SURF performs a systematic summarization of thousands of user reviews through the generation of an interactive, structured and condensed agenda of recommended software changes. An end-to-end evaluation of SURF, involving 2622 reviews related to 12 different mobile applications, demonstrates the high accuracy of SURF in summarizing user review content. In evaluating our approach we also involved the original developers of some of the apps, who confirmed the practical usefulness of the software change recommendations made by SURF. Demo URL: https://youtu.be/Yf-U5ylJXvo Demo webpage: http://www.ifi.uzh.ch/en/seal/people/panichella/tools/SURFTool.html.


Product-Focused Software Process Improvement | 2016

Exploring Mobile User Experience Through Code Quality Metrics

Gerardo Canfora; Andrea Di Sorbo; Francesco Mercaldo; Corrado Aaron Visaggio

Smartphones have been absorbed into everyday life at an astounding rate and continue to become more and more widely used. Much of the success of the mobile paradigm can be attributed to the discovery of a huge market. Users can pick from a large collection of software in domains ranging from games to productivity, and each platform makes installing and removing apps very simple, further encouraging users to try new software. Smartphone users may download applications from the official Google Play market, but those applications do not pass through any review process and can be downloaded very shortly after submission. Google Play offers no mechanism to assure the user of the quality of an installed app, and this is particularly true for user experience: the user simply downloads and runs the application. In this paper we propose a feature set for evaluating the code quality of Android applications, in order to understand how user experience varies in the mobile ecosystem. Our findings show that developers need to focus on software quality in order to make their applications usable from the user's point of view.


Journal of Software: Evolution and Process | 2018

An exploratory study on the evolution of Android malware quality

Francesco Mercaldo; Andrea Di Sorbo; Corrado Aaron Visaggio; Aniello Cimitile; Fabio Martinelli

In the context of software engineering, product software quality measures how well a software artifact is designed and coded. Software products must satisfy nonfunctional properties (e.g., reliability, usability, understandability, and maintainability) in order to make maintenance and evolution sustainable in the long term. Software evolution is also an issue of interest for malware writers, for two reasons. First, to evade detection with minimum effort, malware writers tend to produce “variants,” obtained by applying small changes to existing malware. Second, recent studies demonstrated that malware is steadily improving its evasion strategies and infection mechanisms and is using more and more complex payloads. This suggests that malware writers are devoting considerable effort and skill to producing high-quality software. For this reason, we wonder whether malware writers are also devoting effort to improving the structural quality of their code, as happens in the development of goodware. To investigate this question, we (1) characterize a dataset containing about 20 000 Android applications, divided into goodware and malware, relying on the Android API version they require, and (2) compute software quality metrics, divided into four categories (i.e., dimensional, complexity, object-oriented, and Android-oriented metrics), for the apps belonging to each population. We then identify evolution trends of these metrics in malware and goodware. The results of our study demonstrate that goodware and malicious applications exhibit similar evolution trends for some of the quality indicators, suggesting that malware writers care about the overall quality of their code. Code quality could be considered an indirect measure of how many variants of existing malware will be released in the wild, and how fast.
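The per-app measurement step can be sketched as follows. The regexes and the three metrics below are simplified stand-ins for the study's four metric categories, and they scan source-like text, whereas the study's tooling analyzes actual Android apps:

```python
import re
from statistics import mean

# Crude proxies: branch keywords for complexity, modifier+signature for methods.
BRANCH = re.compile(r"\b(?:if|for|while|case|catch)\b")
METHOD = re.compile(r"\b(?:public|private|protected)[^;{]*\)\s*\{")

def simple_metrics(java_source: str) -> dict:
    """Toy dimensional/complexity metrics over Java-like source text."""
    lines = [line for line in java_source.splitlines() if line.strip()]
    return {
        "loc": len(lines),                            # dimensional metric
        "methods": len(METHOD.findall(java_source)),  # OO-flavoured metric
        "branches": len(BRANCH.findall(java_source)), # crude complexity proxy
    }

# Comparing populations: average a metric over goodware and malware samples.
goodware = ["public void a() { if (x) { y(); } }"]
malware = ["public void b() { for (;;) { if (x) { y(); } } }"]
print(mean(simple_metrics(s)["branches"] for s in goodware))  # 1
print(mean(simple_metrics(s)["branches"] for s in malware))   # 2
```

Tracking such per-population averages across Android API versions is what yields the evolution trends the study compares between goodware and malware.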


Proceedings of the 2nd ACM SIGSOFT International Workshop on App Market Analytics | 2017

Android apps and user feedback: a dataset for software evolution and quality improvement

Giovanni Grano; Andrea Di Sorbo; Francesco Mercaldo; Corrado Aaron Visaggio; Gerardo Canfora; Sebastiano Panichella

Collaboration


Dive into Andrea Di Sorbo's collaborations.
