Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Diego R. Amancio is active.

Publication


Featured research published by Diego R. Amancio.


PLOS ONE | 2014

A Systematic Comparison of Supervised Classifiers

Diego R. Amancio; Cesar H. Comin; Dalcimar Casanova; Gonzalo Travieso; Odemir Martinez Bruno; Francisco A. Rodrigues; Luciano da Fontoura Costa

Pattern recognition has been employed in a myriad of industrial, commercial and academic applications. Many techniques have been devised to tackle such a diversity of applications. Despite the long tradition of pattern recognition research, there is no technique that yields the best classification in all scenarios. Therefore, as many techniques as possible should be considered in high-accuracy applications. Typical related works either focus on the performance of a given algorithm or compare various classification methods. On many occasions, however, researchers who are not experts in the field of machine learning have to deal with practical classification tasks without in-depth knowledge of the underlying parameters. The adequate choice of classifiers and parameters in such practical circumstances constitutes a long-standing problem and is one of the subjects of the current paper. We carried out a performance study of nine well-known classifiers implemented in the Weka framework and compared the influence of the parameter configurations on the accuracy. The default configuration of parameters in Weka was found to provide near-optimal performance for most cases, not including methods such as the support vector machine (SVM). In addition, the k-nearest neighbor method frequently achieved the best accuracy. In certain conditions, it was possible to improve the quality of SVM by more than 20% with respect to its default parameter configuration.
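
The paper's benchmark was run in the Weka framework (Java); as a rough illustration of the same idea only, the sketch below uses scikit-learn as a stand-in and compares a few classifiers under their default parameters via cross-validation. The dataset and classifier selection are arbitrary choices, not those of the study.

```python
# Sketch only: the paper compares classifiers in Weka (Java); here scikit-learn
# stands in to illustrate benchmarking classifiers with default configurations.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)

# Each classifier is evaluated with its default parameters, mirroring the
# question of how far defaults are from a tuned optimum.
classifiers = {
    "kNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```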


PLOS ONE | 2015

A Complex Network Approach to Stylometry

Diego R. Amancio

Statistical methods have been widely employed to study the fundamental properties of language. In recent years, methods from complex and dynamical systems have proved useful for creating several language models. Despite the large number of studies devoted to representing texts with physical models, only a limited number of studies have shown how the properties of the underlying physical systems can be employed to improve the performance of natural language processing tasks. In this paper, I address this problem by devising complex network methods that are able to improve the performance of current statistical methods. Using a fuzzy classification strategy, I show that the topological properties extracted from texts complement the traditional textual description. In several cases, hybrid approaches outperformed the results obtained when only traditional or networked methods were used. Because the proposed model is generic, the framework devised here could be straightforwardly used to study similar textual applications where the topology plays a pivotal role in the description of the interacting agents.
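
As a loose illustration of the networked description mentioned above (not the paper's exact pipeline), the sketch below builds a word co-occurrence network with networkx and extracts a few topological attributes that could sit alongside traditional stylometric features; the tokenization, window size and example text are assumptions.

```python
# Minimal sketch: build a word co-occurrence network from a text and extract
# topological features that could complement traditional stylometric attributes.
import networkx as nx

def cooccurrence_network(text, window=2):
    tokens = [t.lower() for t in text.split() if t.isalpha()]
    g = nx.Graph()
    for i, word in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            g.add_edge(word, tokens[j])        # link words appearing close together
    return g

def topological_features(g):
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    return {
        "clustering": nx.average_clustering(g),
        "avg_shortest_path": nx.average_shortest_path_length(giant),
        "assortativity": nx.degree_assortativity_coefficient(g),
    }

text = "the quick brown fox jumps over the lazy dog and the quick dog sleeps"
print(topological_features(cooccurrence_network(text)))
```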


Physica A-statistical Mechanics and Its Applications | 2012

Structure–semantics interplay in complex networks and its effects on the predictability of similarity in texts

Diego R. Amancio; Osvaldo N. Oliveira; Luciano da Fontoura Costa

The classification of texts has become a major endeavor with so much electronic material available, for it is an essential task in several applications, including search engines and information retrieval. There are different ways to define similarity for grouping similar texts into clusters, as the concept of similarity may depend on the purpose of the task. For instance, in topic extraction similar texts are those within the same semantic field, whereas in author recognition stylistic features should be considered. In this study, we introduce ways to classify texts employing concepts of complex networks, which may be able to capture syntactic, semantic and even pragmatic features. The interplay between various metrics of the complex networks is analyzed with three applications, namely identification of machine translation (MT) systems, evaluation of the quality of machine-translated texts and authorship recognition. We show that topological features of the networks representing texts can enhance the ability to identify MT systems in particular cases. For evaluating the quality of MT texts, on the other hand, high correlation was obtained with methods capable of capturing the semantics. This was expected because the gold standards used are themselves based on word co-occurrence. Notwithstanding, the Katz similarity, which combines semantics and structure in the comparison of texts, achieved the highest correlation with the NIST measurement, indicating that in some cases the combination of both approaches can improve the ability to quantify quality in MT. In authorship recognition, the topological features were again relevant in some contexts, though for the books and authors analyzed good results were obtained with semantic features as well. Because hybrid approaches encompassing semantic and topological features have not been extensively used, we believe that the methodology proposed here may enhance text classification considerably, as it combines well-established strategies.
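
The Katz similarity cited above has a standard closed form that sums walks of all lengths with an attenuation factor: S = (I - beta*A)^(-1) - I. The sketch below computes this generic index on a stock networkx graph; how the paper maps it onto text comparison is not reproduced here.

```python
# Sketch of the generic Katz index: S = (I - beta*A)^(-1) - I, which weights
# paths of every length between two nodes by powers of an attenuation factor.
import numpy as np
import networkx as nx

g = nx.karate_club_graph()                     # stand-in network for illustration
A = nx.to_numpy_array(g)

# beta must stay below 1/lambda_max for the series to converge
beta = 0.5 / max(abs(np.linalg.eigvals(A)))

S = np.linalg.inv(np.eye(len(A)) - beta * A) - np.eye(len(A))
print("Katz similarity between nodes 0 and 33:", S[0, 33])
```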


International Journal of Modern Physics C | 2008

Complex Networks Analysis of Manual and Machine Translations

Diego R. Amancio; Lucas Antiqueira; Thiago Alexandre Salgueiro Pardo; Luciano da Fontoura Costa; Osvaldo Novais Oliveira; Maria das Graças Volpe Nunes

Complex networks have been increasingly used in text analysis, including in connection with natural language processing tools, as important text features appear to be captured by the topology and dynamics of the networks. Following previous works that apply complex networks concepts to text quality measurement, summary evaluation, and author characterization, we now focus on machine translation (MT). In this paper we assess the possible representation of texts as complex networks to evaluate cross-linguistic issues inherent in manual and machine translation. We show that different quality translations generated by MT tools can be distinguished from their manual counterparts by means of metrics such as in- (ID) and out-degrees (OD), clustering coefficient (CC), and shortest paths (SP). For instance, we demonstrate that the average OD in networks of automatic translations consistently exceeds the values obtained for manual ones, and that the CC values of source texts are not preserved for manual translations, but are for good automatic translations. This probably reflects the text rearrangements humans perform during manual translation. We envisage that such findings could lead to better MT tools and automatic evaluation metrics.
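
A minimal sketch of the kind of measurement described above, under an assumed tokenization: a text becomes a directed word-adjacency network, from which simple metrics such as the average out-degree and clustering coefficient are computed. The two example sentences are invented and carry no claim about real MT output.

```python
# Sketch only: represent a text as a directed word-adjacency network and
# compute average out-degree and clustering, the kind of metrics the paper
# uses to separate automatic from manual translations.
import networkx as nx

def adjacency_network(text):
    tokens = [t.lower() for t in text.split() if t.isalpha()]
    g = nx.DiGraph()
    g.add_edges_from(zip(tokens, tokens[1:]))   # edge from each word to its successor
    return g

def summary(g):
    n = g.number_of_nodes()
    return {
        "avg_out_degree": sum(d for _, d in g.out_degree()) / n,
        "clustering": nx.average_clustering(g.to_undirected()),
    }

manual = "the committee approved the proposal after a long debate"
automatic = "the committee has approved the proposal after of a long debate"
print("manual   :", summary(adjacency_network(manual)))
print("automatic:", summary(adjacency_network(automatic)))
```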


New Journal of Physics | 2011

Comparing intermittency and network measurements of words and their dependence on authorship

Diego R. Amancio; Eduardo G. Altmann; Osvaldo Novais Oliveira; Luciano da Fontoura Costa

Many features of texts and languages can now be inferred from statistical analyses using concepts from complex networks and dynamical systems. In this paper, we quantify how topological properties of word co-occurrence networks and intermittency (or burstiness) in word distribution depend on the style of authors. Our database contains 40 books by eight authors who lived in the nineteenth and twentieth centuries, for which the following network measurements were obtained: the clustering coefficient, average shortest path lengths and betweenness. We found that the two factors with the strongest dependence on authorship were the skewness in the distribution of word intermittency and the average shortest paths. Other factors, such as betweenness and the exponent of Zipf's law, show only weak dependence on authorship. Also assessed was the contribution from each measurement to authorship recognition using three machine learning methods. The best performance, about 65% accuracy, was obtained by combining complex network and intermittency features with the nearest-neighbor algorithm for automatic authorship attribution. From a detailed analysis of the interdependence of the various metrics, we conclude that the methods used here are complementary, providing short- and long-scale perspectives on texts that are useful for applications such as the identification of topical words and information retrieval.
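
Word intermittency (burstiness) is commonly quantified from the spacings between successive occurrences of a word; the sketch below uses the ratio sigma/mu of those spacings, which may differ from the paper's exact normalization, and the example text and occurrence threshold are assumptions.

```python
# Sketch of one common definition of word intermittency (burstiness): the
# ratio sigma/mu of the gaps between successive occurrences of a word.
import numpy as np

def intermittency(tokens, word):
    positions = [i for i, t in enumerate(tokens) if t == word]
    if len(positions) < 3:
        return None                      # too few occurrences to be meaningful
    gaps = np.diff(positions)
    return gaps.std() / gaps.mean()

tokens = ("the cat sat on the mat and the dog sat near the cat "
          "while the other cat slept on the mat").split()
for w in ["the", "cat", "sat"]:
    print(w, intermittency(tokens, w))
```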


Journal of Statistical Mechanics: Theory and Experiment | 2015

Authorship recognition via fluctuation analysis of network topology and word intermittency

Diego R. Amancio

Statistical methods have been widely employed in many practical natural language processing applications. More specifically, complex network concepts and methods from dynamical systems theory have been successfully applied to recognize stylistic patterns in written texts. Despite the large number of studies devoted to representing texts with physical models, only a few studies have assessed the relevance of attributes derived from the analysis of stylistic fluctuations. Because fluctuations represent a pivotal factor for characterizing a myriad of real systems, this study focused on the analysis of the properties of stylistic fluctuations in texts via topological analysis of complex networks and intermittency measurements. The results showed that different authors display distinct fluctuation patterns. In particular, it was found that it is possible to identify the authorship of books using the intermittency of specific words. Taken together, the results described here suggest that the patterns found in stylistic fluctuations could be used to analyze other related complex systems. Furthermore, the discovery of novel patterns related to textual stylistic fluctuations indicates that these patterns could be useful to improve the state of the art of many stylistic-based natural language processing tasks.
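
One way to picture the fluctuation analysis described above (a sketch, not the paper's procedure) is to slice a text into fixed-size windows, build a small co-occurrence network per window, and track how a topological quantity varies along the text; the window size and example text below are arbitrary.

```python
# Loose sketch: measure how the average clustering coefficient of per-window
# co-occurrence networks fluctuates along a text.
import numpy as np
import networkx as nx

def window_clustering(tokens, window_size=50):
    values = []
    for start in range(0, len(tokens) - window_size + 1, window_size):
        chunk = tokens[start:start + window_size]
        g = nx.Graph()
        g.add_edges_from(zip(chunk, chunk[1:]))   # adjacent words are linked
        values.append(nx.average_clustering(g))
    return np.array(values)

tokens = ("in the beginning was the word and the word was with meaning " * 20).split()
values = window_clustering(tokens)
print("mean clustering:", values.mean(), "fluctuation (std):", values.std())
```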


PLOS ONE | 2013

Probing the Statistical Properties of Unknown Texts: Application to the Voynich Manuscript

Diego R. Amancio; Eduardo G. Altmann; Diego Rybski; Osvaldo N. Oliveira; Luciano da Fontoura Costa

While the use of statistical physics methods to analyze large corpora has been useful to unveil many patterns in texts, no comprehensive investigation has been performed on the interdependence between syntactic and semantic factors. In this study we propose a framework for determining whether a text (e.g., written in an unknown alphabet) is compatible with a natural language and to which language it could belong. The approach is based on three types of statistical measurements, i.e., those obtained from first-order statistics of word properties in a text, from the topology of complex networks representing texts, and from intermittency concepts where text is treated as a time series. Comparative experiments were performed with the New Testament in 15 different languages and with distinct books in English and Portuguese in order to quantify the dependency of the different measurements on the language and on the story being told in the book. The metrics found to be informative in distinguishing real texts from their shuffled versions include assortativity, degree and selectivity of words. As an illustration, we analyze an undeciphered medieval manuscript known as the Voynich Manuscript. We show that it is mostly compatible with natural languages and incompatible with random texts. We also obtain candidates for keywords of the Voynich Manuscript, which could be helpful in the effort to decipher it. Because we were able to identify statistical measurements that are more dependent on the syntax than on the semantics, the framework may also serve for text analysis in language-dependent applications.
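
The comparison against shuffled versions mentioned above can be sketched as follows: compute a network metric on the original token sequence and on a randomly shuffled copy, which preserves word frequencies but destroys ordering. The metric (degree assortativity) and toy text are illustrative choices, not the paper's setup.

```python
# Sketch of a shuffling test: the same tokens in original vs. random order.
import random
import networkx as nx

def assortativity(tokens):
    g = nx.Graph()
    g.add_edges_from((a, b) for a, b in zip(tokens, tokens[1:]) if a != b)
    return nx.degree_assortativity_coefficient(g)

tokens = ("she sells sea shells by the sea shore and the shells she sells "
          "are surely sea shells so if she sells shells on the sea shore").split()
print("original :", assortativity(tokens))

random.seed(0)
shuffled = tokens[:]
random.shuffle(shuffled)                 # word frequencies kept, ordering destroyed
print("shuffled :", assortativity(shuffled))
```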


Scientometrics | 2012

Using complex networks concepts to assess approaches for citations in scientific papers

Diego R. Amancio; Maria das Graças Volpe Nunes; Osvaldo N. Oliveira; Luciano da Fontoura Costa

The number of citations received by authors in scientific journals has become a major parameter to assess individual researchers and the journals themselves through the impact factor. A fair assessment therefore requires that the criteria for selecting references in a given manuscript should be unbiased with regard to the authors or journals cited. In this paper, we assess approaches for citations considering two recommendations for authors to follow while preparing a manuscript: (i) consider similarity of contents with the topics investigated, lest related work should be reproduced or ignored; (ii) perform a systematic search over the network of citations including seminal or very related papers. We use formalisms of complex networks for two datasets of papers from the arXiv and the Web of Science repositories to show that neither of these two criteria is fulfilled in practice. By representing the texts as complex networks, we estimated a similarity index between pieces of texts and found that the list of references did not contain the most similar papers in the dataset. This was quantified by calculating a consistency index, whose maximum value is one if the references in a given paper are the most similar in the dataset. For the areas of “complex networks” and “graphenes”, the consistency index was only 0.11–0.23 and 0.10–0.25, respectively. To simulate a systematic search in the citation network, we employed a traditional random walk search (i.e., diffusion) and a random walk whose transition probabilities are proportional to the number of ingoing edges of the neighbours. The frequency of visits to the nodes (papers) in the network had a very small correlation with either the actual list of references in the papers or with the number of downloads from the arXiv repository. Therefore, the authors and users of the repository apparently did not follow the criterion related to a systematic search over the network of citations. Based on these results, we propose an approach that we believe is fairer for evaluating and complementing citations of a given author, effectively leading to a virtual scientometry.
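
The biased random walk described above can be sketched on a toy directed graph standing in for a citation network: at each step the walker moves to a successor with probability proportional to that node's in-degree. The graph, restart rule and step count below are assumptions made for illustration only.

```python
# Sketch (toy "citation" graph): random walk biased toward well-cited nodes,
# counting how often each node is visited.
import random
from collections import Counter
import networkx as nx

g = nx.gnp_random_graph(50, 0.08, directed=True, seed=1)   # invented network

def biased_walk(g, steps=10000, seed=0):
    rng = random.Random(seed)
    node = rng.choice(list(g.nodes))
    visits = Counter()
    for _ in range(steps):
        neighbours = list(g.successors(node))
        if not neighbours:                      # dead end: restart at a random node
            node = rng.choice(list(g.nodes))
            continue
        weights = [g.in_degree(n) + 1 for n in neighbours]   # bias toward "well-cited" papers
        node = rng.choices(neighbours, weights=weights)[0]
        visits[node] += 1
    return visits

print(biased_walk(g).most_common(5))
```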


EPL | 2012

Word sense disambiguation via high order of learning in complex networks

Thiago Christiano Silva; Diego R. Amancio

Complex networks have been employed to model many real systems and as a modeling tool in a myriad of applications. In this paper, we apply the framework of complex networks to the problem of supervised classification in the word sense disambiguation task, which consists in deriving a function from the supervised (or labeled) training data of ambiguous words. Traditional supervised data classification takes into account only topological or physical features of the input data. The human (animal) brain, on the other hand, performs both low- and high-level orders of learning and readily identifies patterns according to the semantic meaning of the input data. In this paper, we apply a hybrid technique that encompasses both types of learning to the field of word sense disambiguation and show that the high-level order of learning can indeed improve the accuracy of the model. This evidence demonstrates that the internal structures formed by the words present patterns that, in general, cannot be correctly unveiled by traditional techniques alone. Finally, we exhibit the behavior of the model for different weights of the low- and high-level classifiers by plotting decision boundaries. This study helps one to better understand the effectiveness of the model.
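
A rough sketch of the hybrid weighting idea, with an admittedly crude stand-in for the high-level (network-based) term: blend the class probabilities of a conventional classifier with a neighbourhood-based score, sweeping the mixing weight. None of the specific choices below come from the paper.

```python
# Loose sketch: blend a "low-level" classifier's probabilities with a simple
# neighbourhood-based "high-level" score, controlled by a weight w.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import NearestNeighbors

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, y_train, X_test, y_test = X[:150], y[:150], X[150:], y[150:]

low = GaussianNB().fit(X_train, y_train)
p_low = low.predict_proba(X_test)

# High-level stand-in: fraction of a test point's k nearest training
# neighbours belonging to each class (a crude proxy for structural conformity).
nn = NearestNeighbors(n_neighbors=7).fit(X_train)
_, idx = nn.kneighbors(X_test)
p_high = np.stack([(y_train[idx] == c).mean(axis=1) for c in np.unique(y_train)], axis=1)

for w in (0.0, 0.5, 1.0):                       # w = weight of the high-level term
    pred = ((1 - w) * p_low + w * p_high).argmax(axis=1)
    print(f"w={w}: accuracy = {(pred == y_test).mean():.2f}")
```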


Journal of Informetrics | 2013

On time-varying collaboration networks

Matheus P. Viana; Diego R. Amancio; Luciano da Fontoura Costa

The patterns of scientific collaboration have been frequently investigated in terms of complex networks without reference to time evolution. In the present work, we derive collaborative networks (from the arXiv repository) parameterized along time. By defining the concept of affine group, we identify several interesting trends in scientific collaboration, including the fact that the average size of the affine groups grows exponentially, while the number of authors increases as a power law. We were therefore able to identify, through extrapolation, the possible date when a single affine group is expected to emerge. Characteristic collaboration patterns were identified for each researcher, and their analysis revealed that larger affine groups tend to be less stable.
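
As a sketch of collaboration networks parameterized along time (using invented data and plain connected components rather than the paper's affine groups), the snippet below adds co-authorship edges year by year and tracks the growth of the largest connected group of authors.

```python
# Sketch only: time-sliced co-authorship network built from a toy paper list.
import networkx as nx
from itertools import combinations

papers = [                                  # (year, authors) -- invented data
    (2008, ["A", "B"]), (2009, ["B", "C"]), (2010, ["C", "D", "E"]),
    (2011, ["A", "E"]), (2012, ["F", "G"]), (2013, ["E", "F"]),
]

g = nx.Graph()
for year, authors in sorted(papers):
    for a, b in combinations(authors, 2):   # every pair of co-authors gets an edge
        g.add_edge(a, b)
    largest = max(nx.connected_components(g), key=len)
    print(f"{year}: {g.number_of_nodes()} authors, largest group = {len(largest)}")
```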

Collaboration


Dive into Diego R. Amancio's collaborations.

Top Co-Authors

Cesar H. Comin
Federal University of São Carlos

L. da F. Costa
University of São Paulo