
Publication


Featured research published by Igor Scaliante Wiese.


International Conference on Software Engineering | 2014

The hard life of open source software project newcomers

Igor Steinmacher; Igor Scaliante Wiese; Tayana Conte; Marco Aurélio Gerosa; David F. Redmiles

When onboarding to an open source software (OSS) project, contributors face many different barriers that hinder their contributions and, in many cases, lead them to drop out. Many projects leverage the contributions of outsiders, and their sustainability relies on retaining some of these newcomers. In this paper, we discuss barriers faced by newcomers to OSS, identified through a qualitative analysis of data obtained from newcomers and members of OSS projects. We organize the results into a conceptual model composed of 38 barriers grouped into seven categories. These barriers may motivate new studies and the development of appropriate tooling to better support the onboarding of new contributors.


Proceedings of the Third International Workshop on Recommendation Systems for Software Engineering | 2012

Recommending mentors to software project newcomers

Igor Steinmacher; Igor Scaliante Wiese; Marco Aurélio Gerosa

The success of Open Source Software projects depends on a continuous influx of newcomers and their contributions. Newcomers play an important role as potential future developers, but they face difficulties and obstacles when initiating their interaction with a project, resulting in a high number of withdrawals. This paper presents a recommendation system aimed at helping newcomers find the most appropriate project member to mentor them in a technical task. The proposed system uses temporal and social aspects of developers' behavior, in addition to recent contextual information, to recommend the most suitable mentor at the moment.
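The abstract does not give the scoring function; purely as an illustration of the general idea, a recency-weighted ranking of project members by their past activity on the files a newcomer's task touches might look like the sketch below. The commit data, half-life parameter, and function names are all hypothetical, not the authors' actual recommender.

    import math
    import time

    # Hypothetical commit history: (author, file, unix timestamp).
    COMMITS = [
        ("alice", "core/parser.py", time.time() - 5 * 86400),
        ("bob",   "core/parser.py", time.time() - 90 * 86400),
        ("alice", "docs/guide.md",  time.time() - 2 * 86400),
        ("carol", "core/lexer.py",  time.time() - 10 * 86400),
    ]

    def recommend_mentors(task_files, commits, half_life_days=30.0, now=None):
        """Rank members by recency-weighted activity on the task's files.

        Illustrative heuristic only: recent commits to the files the newcomer
        needs to touch count more than old ones (exponential decay).
        """
        now = now or time.time()
        scores = {}
        for author, path, timestamp in commits:
            if path not in task_files:
                continue
            age_days = (now - timestamp) / 86400.0
            weight = math.exp(-math.log(2) * age_days / half_life_days)
            scores[author] = scores.get(author, 0.0) + weight
        return sorted(scores.items(), key=lambda item: item[1], reverse=True)

    if __name__ == "__main__":
        # alice should rank above bob: her commit to core/parser.py is recent.
        print(recommend_mentors({"core/parser.py"}, COMMITS))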


Predictive Models in Software Engineering | 2014

Social metrics included in prediction models on software engineering: a mapping study

Igor Scaliante Wiese; Filipe Roseiro Côgo; Reginaldo Ré; Igor Steinmacher; Marco Aurélio Gerosa

Context: Previous work that used prediction models in Software Engineering included few social metrics as predictors, even though many researchers argue that Software Engineering is a social activity. Even when social metrics were considered, they were classified as part of other dimensions, such as process, history, or change. Moreover, few papers report the individual effects of social metrics. Thus, it is not yet clear which social metrics are used in prediction models and what the results of their use are in different contexts. Objective: To identify, characterize, and classify social metrics included in prediction models reported in the literature. Method: We conducted a mapping study (MS) using snowballing citation analysis. We built an initial seed list by adapting the search strings of two previous systematic reviews on software prediction models. After that, we conducted backward and forward citation analysis using the initial seed list. Finally, we visited the profile of each distinct author identified in the previous steps and contacted each author who published more than two papers to ask for additional candidate studies. Results: We identified 48 primary studies and 51 social metrics. We organized the metrics into nine categories, divided into three groups: communication, project, and commit-related. We also mapped the applications of each group of metrics, indicating their positive or negative effects. Conclusions: This mapping may support researchers and practitioners in building prediction models that consider more social metrics.


International Workshop on Principles of Software Evolution | 2013

What can commit metadata tell us about design degradation?

Gustavo Ansaldi Oliva; Igor Steinmacher; Igor Scaliante Wiese; Marco Aurélio Gerosa

Design degradation has long been assessed by means of structural analyses applied to successive versions of a software system. More recently, repository mining techniques have been developed to uncover rich historical information about software projects. In this paper, we leverage such information and propose an approach to assess design degradation that is programming-language agnostic and relies almost exclusively on commit metadata. Our approach currently focuses on the assessment of two particular design smells: rigidity and fragility. Rigidity refers to designs that are difficult to change due to ripple effects, and fragility refers to designs that tend to break in different areas every time a change is performed. We evaluated our approach on the Apache Maven project, and the results indicated that the approach is feasible and that the project suffered from increasing fragility.
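The paper's actual indicators are not reproduced here; the toy sketch below only illustrates how commit metadata alone (which files each commit touched) could feed such an assessment, using two hypothetical proxies: average commit size as a rigidity-like signal of ripple effects, and the spread of touched top-level directories as a fragility-like signal.

    from statistics import mean

    # Hypothetical commit metadata: each commit is the list of files it touched.
    COMMITS = [
        ["core/a.py", "core/b.py", "ui/view.py"],
        ["core/a.py", "net/http.py", "ui/view.py", "docs/x.md"],
        ["core/b.py"],
    ]

    def rigidity_proxy(commits):
        """Illustrative proxy: average number of files changed per commit.
        Larger values suggest that changes ripple across many files."""
        return mean(len(files) for files in commits)

    def fragility_proxy(commits):
        """Illustrative proxy: average number of distinct top-level directories
        touched per commit. Changes that keep spreading into unrelated areas
        hint at fragility."""
        return mean(len({f.split("/")[0] for f in files}) for files in commits)

    if __name__ == "__main__":
        print("rigidity proxy: ", rigidity_proxy(COMMITS))
        print("fragility proxy:", fragility_proxy(COMMITS))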


International Conference on Software Engineering | 2018

How modern news aggregators help development communities shape and share knowledge

Mauricio Finavaro Aniche; Christoph Treude; Igor Steinmacher; Igor Scaliante Wiese; Gustavo Pinto; Margaret-Anne D. Storey; Marco Aurélio Gerosa

Many developers rely on modern news aggregator sites such as Reddit and Hacker News to stay up to date with the latest technological developments and trends. To understand what motivates developers to contribute, what kind of content is shared, and how knowledge is shaped by the community, we interviewed and surveyed developers who participate in Reddit's programming subreddit, and we analyzed a sample of posts on both Reddit and Hacker News. We learned what kind of content is shared on these websites and what motivates developers to post, share, discuss, evaluate, and aggregate knowledge on these aggregators, while revealing challenges developers face in terms of how content and participant behavior are moderated. Our insights aim to improve the practices developers follow when using news aggregators, as well as to guide tool makers on how to improve their tools. Our findings are also relevant to researchers who study developer communities of practice.


Journal of Systems and Software | 2017

Using contextual information to predict co-changes

Igor Scaliante Wiese; Reginaldo Ré; Igor Steinmacher; Rodrigo Takashi Kuroda; Gustavo Ansaldi Oliva; Christoph Treude; Marco Aurélio Gerosa

Highlights: Contextual information can improve co-change prediction, especially its precision. The proposed models outperform the association rules used as the baseline model. More than one dimension was frequently selected by our classifier. Background: Co-change prediction makes developers aware of which artifacts will change together with the artifact they are working on. In the past, researchers relied on structural analysis to build prediction models. More recently, hybrid approaches relying on historical information and textual analysis have been proposed. Despite the advances in the area, software developers still do not use these approaches widely, presumably because of the number of false recommendations. We conjecture that the contextual information of software changes collected from issues, developers' communication, and commit metadata captures the change patterns of software artifacts and can improve the prediction models. Objective: Our goal is to develop more accurate co-change prediction models by using contextual information from software changes. Method: We selected pairs of files based on relevant association rules and built a prediction model for each pair relying on their associated contextual information. We evaluated our approach on two open source projects, namely Apache CXF and Derby. Besides calculating model accuracy metrics, we also performed a feature selection analysis to identify the best predictors for characterizing co-changes and to reduce overfitting. Results: Our models presented low rates of false negatives (8% on average) and false positives (11% on average). We obtained prediction models with AUC values ranging from 0.89 to 1.00, and our models outperformed the association rules, our baseline model, when we compared their precision values. Commit-related metrics were the most frequently selected ones for both projects. On average, 6 out of 23 metrics were necessary to build the classifiers. Conclusions: Prediction models based on contextual information from software changes are accurate and, consequently, can be used to support software maintenance and evolution, warning developers when they miss relevant artifacts while performing a software change.
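As a rough illustration of the association-rule baseline mentioned in the abstract (not the paper's per-pair classifiers or its contextual metrics), pairwise co-change rules with simple support and confidence can be mined from commit history along these lines; the commit data and thresholds below are made up.

    from collections import Counter
    from itertools import combinations

    # Hypothetical commit history: each commit is the set of files it changed.
    COMMITS = [
        {"A.java", "B.java"},
        {"A.java", "B.java", "C.java"},
        {"A.java", "C.java"},
        {"B.java"},
    ]

    def cochange_rules(commits, min_support=2, min_confidence=0.5):
        """Mine pairwise co-change rules X -> Y with support and confidence."""
        file_count = Counter()
        pair_count = Counter()
        for files in commits:
            file_count.update(files)
            pair_count.update(frozenset(p) for p in combinations(sorted(files), 2))

        rules = []
        for pair, support in pair_count.items():
            if support < min_support:
                continue
            x, y = tuple(pair)
            for antecedent, consequent in ((x, y), (y, x)):
                confidence = support / file_count[antecedent]
                if confidence >= min_confidence:
                    rules.append((antecedent, consequent, support, confidence))
        return sorted(rules, key=lambda r: (-r[2], -r[3]))

    if __name__ == "__main__":
        for antecedent, consequent, support, confidence in cochange_rules(COMMITS):
            print(f"{antecedent} -> {consequent}  "
                  f"support={support} confidence={confidence:.2f}")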


Open Source Systems | 2015

An Empirical Study of the Relation Between Strong Change Coupling and Defects Using History and Social Metrics in the Apache Aries Project

Igor Scaliante Wiese; Rodrigo Takashi Kuroda; Reginaldo Ré; Gustavo Ansaldi Oliva; Marco Aurélio Gerosa

Change coupling is an implicit relationship observed when artifacts change together during software evolution. The literature leverages change coupling analysis for several purposes. For example, researchers discovered that change coupling is associated with software defects and reveals relationships between software artifacts that cannot be found by scanning code or documentation. In this paper, we empirically investigate the strongest change couplings from the Apache Aries project to characterize them and identify their impact on software development. We used historical and social metrics collected from commits and issue reports to build classification models that identify strong change couplings. Historical metrics were used because change coupling is a phenomenon associated with recurrent co-changes found in the software history; social metrics were used because developers often interact with each other in issue trackers to accomplish their tasks. Our classification models showed high accuracy, with 70-99% F-measure and 88-99% AUC. Using the same set of metrics, we also predicted the number of future defects for the artifacts involved in strong change couplings. More specifically, we were able to predict 45.7% of the defects where these strong change couplings reoccurred in the post-release. These findings suggest that developers and project managers should detect and monitor strong change couplings, because they can be associated with defects and tend to happen again in the subsequent release.
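The Apache Aries dataset and the exact metric set are not reproduced here; the sketch below (assuming scikit-learn and numpy) only shows, on synthetic data, the general shape of such an evaluation: a classifier over made-up "historical" and "social" features, scored with F-measure and AUC. The feature semantics in the comments are assumptions for illustration.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score, roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 400
    X = np.column_stack([
        rng.poisson(5, n),       # e.g. past co-changes of the pair (historical)
        rng.poisson(3, n),       # e.g. commits touching the pair (historical)
        rng.poisson(4, n),       # e.g. comments on linked issues (social)
        rng.integers(1, 10, n),  # e.g. distinct developers discussing (social)
    ])
    # Synthetic label loosely tied to the features so the example is non-trivial.
    y = ((X[:, 0] + X[:, 2] + rng.normal(0, 2, n)) > 9).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)

    predictions = clf.predict(X_test)
    probabilities = clf.predict_proba(X_test)[:, 1]
    print("F-measure:", round(f1_score(y_test, predictions), 3))
    print("AUC:      ", round(roc_auc_score(y_test, probabilities), 3))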


IEEE Latin America Transactions | 2015

Do historical metrics and developers' communication aid to predict change couplings?

Igor Scaliante Wiese; Rodrigo Takashi Kuroda; Gustavo Ansaldi Oliva

Developers contribute to open-source projects by forking the code and submitting pull requests. Once a pull request is submitted, interested parties can review the set of changes, discuss potential modifications, and even push additional commits if necessary. Mining artifacts that were committed together during the history of pull requests makes it possible to infer change couplings among these artifacts. Supported by Conway's Law, which states that “organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations”, we hypothesize that social network analysis (SNA) is able to identify strong and weak change dependencies. In this paper, we used statistical models relying on centrality, ego, and structural holes metrics computed from communication networks to predict co-changes among files included in pull requests submitted to the Ruby on Rails project. To the best of our knowledge, this is the first study to employ SNA metrics to predict change dependencies in GitHub projects.
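A minimal sketch, assuming networkx and a made-up communication graph, of the kinds of SNA metrics named above (centrality, ego network size, and structural-holes measures); it is not the paper's statistical models or its Ruby on Rails data.

    import networkx as nx

    # Hypothetical communication network: an edge means two developers
    # commented on the same pull-request discussion.
    G = nx.Graph()
    G.add_edges_from([
        ("alice", "bob"), ("alice", "carol"), ("alice", "dave"),
        ("bob", "carol"), ("dave", "erin"),
    ])

    degree = nx.degree_centrality(G)
    betweenness = nx.betweenness_centrality(G)
    ego_size = {n: nx.ego_graph(G, n).number_of_nodes() - 1 for n in G}
    effective_size = nx.effective_size(G)  # structural-holes measure
    constraint = nx.constraint(G)          # Burt's constraint

    for dev in G:
        print(f"{dev:6s} degree={degree[dev]:.2f} "
              f"betweenness={betweenness[dev]:.2f} ego={ego_size[dev]} "
              f"eff_size={effective_size[dev]:.2f} "
              f"constraint={constraint[dev]:.2f}")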


Mining Software Repositories | 2018

Understanding the usage, impact, and adoption of non-OSI approved licenses

Rômulo Manciola Meloca; Gustavo Pinto; Leonardo Baiser; Marco Mattos; Ivanilton Polato; Igor Scaliante Wiese; Daniel M. German

The software license is one of the most important non-executable parts of any software system. However, due to its non-technical nature, developers often misuse or misunderstand software licenses. Although previous studies reported problems related to license clashes and inconsistencies, in this paper we shed light on an important but as yet overlooked issue: the use of non-approved open-source licenses. Such licenses claim to be open source but have not been formally approved by the Open Source Initiative (OSI). When a developer releases software under a non-approved license, even with the intent of making it open source, the original author might not be granting the rights required by those who use the software. To uncover the reasons behind the use of non-approved licenses, we conducted a mixed-methods study, mining data from 657K open-source projects and their 4,367K versions, and surveying 76 developers who published some of these projects. Although 1,058,554 of the project versions employ at least one non-approved license, non-approved licenses account for 21.51% of license usage. We also observed that it is not uncommon for developers to change from a non-approved to an approved license. When asked, some developers mentioned that this transition was due to a better understanding of the disadvantages of using a non-approved license. This perspective is particularly important since developers often rely on package managers to easily and quickly get their dependencies working.
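A small illustrative check in this spirit, flagging declared licenses that are not on the OSI-approved list, might look like the following; the identifier set is only a sample of OSI-approved SPDX IDs, and the package metadata is hypothetical.

    # Small sample of OSI-approved SPDX identifiers; not the full list.
    OSI_APPROVED_SAMPLE = {
        "MIT", "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause",
        "GPL-2.0-only", "GPL-3.0-only", "LGPL-3.0-only", "MPL-2.0", "ISC",
    }

    def flag_non_osi(packages):
        """Return (package, license) pairs whose declared license is missing
        or not in the OSI-approved sample set."""
        return [(name, license_id) for name, license_id in packages
                if license_id is None or license_id not in OSI_APPROVED_SAMPLE]

    if __name__ == "__main__":
        # Hypothetical package metadata, e.g. scraped from a package manager.
        sample = [("left-pad", "WTFPL"), ("requests", "Apache-2.0"),
                  ("mylib", None), ("flask", "BSD-3-Clause")]
        for name, license_id in flag_non_osi(sample):
            print(f"{name}: non-OSI-approved or missing license ({license_id})")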


2015 Latin American Computing Conference (CLEI) | 2015

An exploratory study about the cross-project defect prediction: Impact of using different classification algorithms and a measure of performance in building predictive models

Ricardo F. P. Satin; Igor Scaliante Wiese; Reginaldo Ré

Predicting defects in software projects is a complex task, especially in the initial phases of software development, because little data is available. The use of cross-project defect prediction is indicated in such situations because it enables the reuse of data from similar projects. In order to find and group similar projects, this paper proposes building cross-project prediction models using a measure of performance achieved through the application of classification algorithms. To do so, we studied the combined application of different classification, feature selection, and data clustering algorithms, applied to 1,270 projects, aiming to build different cross-project prediction models. We concluded that the Naive Bayes algorithm obtained the best performance, with 31.58% satisfactory predictions across the 19 models created with it. The proposal seems promising, since the local predictions considered satisfactory reached 31.58%, against 26.31% for global predictions.
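A rough sketch of that kind of pipeline, clustering projects by their metric profiles, selecting features, and training Naive Bayes, is shown below on synthetic data (assuming scikit-learn and numpy); it is not the study's actual setup, metrics, or its 1,270-project dataset.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.metrics import recall_score
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(1)

    # Synthetic per-module metrics: 30 "projects", 20 modules each, 6 metrics.
    projects = [rng.normal(loc=rng.uniform(0, 3), scale=1.0, size=(20, 6))
                for _ in range(30)]
    # Fake defect labels driven by the first metric, just to make the demo work.
    labels = [(p[:, 0] > np.median(p[:, 0])).astype(int) for p in projects]

    # 1) Group similar projects by their mean metric profile.
    profiles = np.array([p.mean(axis=0) for p in projects])
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)

    # 2) Within each cluster, train on all projects but one, test cross-project.
    for c in range(3):
        members = [i for i in range(len(projects)) if clusters[i] == c]
        if len(members) < 2:
            continue
        test, train = members[0], members[1:]
        X_train = np.vstack([projects[i] for i in train])
        y_train = np.concatenate([labels[i] for i in train])

        selector = SelectKBest(f_classif, k=3).fit(X_train, y_train)
        model = GaussianNB().fit(selector.transform(X_train), y_train)

        y_pred = model.predict(selector.transform(projects[test]))
        recall = recall_score(labels[test], y_pred, zero_division=0)
        print(f"cluster {c}: test project {test}, recall={recall:.2f}")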

Collaboration


Top co-authors of Igor Scaliante Wiese and their affiliations:

Igor Steinmacher
Federal University of Technology - Paraná

Reginaldo Ré
Federal University of Technology - Paraná

Rodrigo Takashi Kuroda
Federal University of Technology - Paraná

Gustavo Pinto
Federal University of Pará

Ana Paula Chaves
Federal University of Technology - Paraná

Douglas Nassif Roma Junior
Federal University of Technology - Paraná

Jefferson O. Silva
Pontifícia Universidade Católica de São Paulo