Solange Oliveira Rezende
University of São Paulo
Publications
Featured research published by Solange Oliveira Rezende.
Archive | 2006
Jaime Simão Sichman; Helder Coelho; Solange Oliveira Rezende
Invited Speakers.- Organizing Software Agents.- Learning, Logic, and Probability: A Unified View.- Reinventing Machine Learning with ROC Analysis.- Cocktail Party Processing.- AI in Education and Intelligent Tutoring Systems.- Diagnostic of Programs for Programming Learning Tools.- Intelligent Learning Objects: An Agent Approach to Create Reusable Intelligent Learning Environments with Learning Objects.- An Experimental Study of Effective Feedback Strategies for Intelligent Tutorial Systems for Foreign Language.- Autonomous Agents and Multiagent Systems.- Coordination with Collective and Individual Decisions.- Negotiator Agents for the Patrolling Task.- Running Agents in Mobile Devices.- A Multi Agent Based Simulator for Brazilian Wholesale Electricity Energy Market.- Using IDEF0 to Enhance Functional Analysis in MOISE+ Organizational Modeling.- Simulations Show That Shame Drives Social Cohesion.- SILENT AGENTS: From Observation to Tacit Communication.- Simulating Working Environments Through the Use of Personality-Based Agents.- GAPatrol: An Evolutionary Multiagent Approach for the Automatic Definition of Hotspots and Patrol Routes.- Learning by Knowledge Sharing in Autonomous Intelligent Systems.- Formal Analysis of a Probabilistic Knowledge Communication Framework.- Computer Vision and Pattern Recognition.- Color Image Segmentation Through Unsupervised Gaussian Mixture Models.- An Image Analysis Methodology Based on Deterministic Tourist Walks.- Feature Characterization in Iris Recognition with Stochastic Autoregressive Models.- Cryptographic Keys Generation Using FingerCodes.- Evolutionary Computation and Artificial Life.- Using Computational Intelligence and Parallelism to Solve an Industrial Design Problem.- Two-Phase GA-Based Model to Learn Generalized Hyper-heuristics for the 2D-Cutting Stock Problem.- Mirrored Traveling Tournament Problem: An Evolutionary Approach.- Pattern Sequencing Problems by Clustering Search.- Hybrid Systems (Fuzzy, Genetic, Neural, Symbolic).- Development of a Hybrid Intelligent System for Electrical Load Forecasting.- Extending a Hybrid CBR-ANN Model by Modeling Predictive Attributes Using Fuzzy Sets.- Development of a Neural Sensor for On-Line Prediction of Coagulant Dosage in a Potable Water Treatment Plant in the Way of Its Diagnosis.- Multi-objective Memetic Algorithm Applied to the Automated Synthesis of Analog Circuits.- A Hybrid Learning Strategy for Discovery of Policies of Action.- Knowledge Acquisition and Machine Learning.- A Fractal Dimension Based Filter Algorithm to Select Features for Supervised Learning.- Comparing Meta-learning Algorithms.- A New Linear Dimensionality Reduction Technique Based on Chernoff Distance.- A Machine Learning Approach to the Identification of Appositives.- Parameterized Imprecise Classification: Elicitation and Assessment.- Evolutionary Training of SVM for Multiple Category Classification Problems with Self-adaptive Parameters.- Time-Space Ensemble Strategies for Automatic Music Genre Classification.- Predictive and Descriptive Approaches to Learning Game Rules from Vision Data.- Knowledge Discovery and Data Mining.- Mining Intonation Corpora Using Knowledge Driven Sequential Clustering.- Using Common Sense to Recognize Cultural Differences.- Detection of Repetitive Patterns in Action Sequences with Noise in Programming by Demonstration.- Knowledge Engineering, Ontologies and Case Based Reasoning.- Supporting Ontology-Based Semantic Matching of Web Services in MoviLog.- Learning Similarity Metrics from Case Solution Similarity.-
Knowledge Representation and Reasoning.- Epistemic Actions and Ontic Actions: A Unified Logical Framework.- Strings and Holes: An Exercise on Spatial Reasoning.- A Causal Perspective to Qualitative Spatial Reasoning in the Situation Calculus.- PFORTE: Revising Probabilistic FOL Theories.- Rule Schemata for Game Artificial Intelligence.- Natural Language Processing.- Selecting a Feature Set to Summarize Texts in Brazilian Portuguese.- Word Sense Disambiguation Based on Word Sense Clustering.- Comparing Two Markov Methods for Part-of-Speech Tagging of Portuguese.- Shallow Parsing Based on Comma Values.- Planning and Scheduling.- Unifying Nondeterministic and Probabilistic Planning Through Imprecise Markov Decision Processes.- Achieving Conditional Plans Through the Use of Classical Planning Algorithms.- Assessing the Value of Future and Present Options in Real-Time Planning.- Reading PDDL, Writing an Object-Oriented Model.- Robotics.- A Reactive Lazy PRM Approach for Nonholonomic Motion Planning.- Negative Information in Cooperative Multirobot Localization.- Gait Control Generation for Physically Based Simulated Robots Using Genetic Algorithms.- Does Complex Learning Require Complex Connectivity?.- Theoretical and Logical Methods.- The Predicate-Minimizing Logic MIN.- Strong Negation in Well-Founded and Partial Stable Semantics for Logic Programs.- MAT Logic: A Temporal×Modal Logic with Non-deterministic Operators to Deal with Interactive Systems in Communication Technologies.- Uncertainty.- Probabilistic Logic with Strong Independence.- Bayesian Model Combination and Its Application to Cervical Cancer Detection.
International Conference on Neural Information Processing | 2002
Katti Faceli; A. de Carvalho; Solange Oliveira Rezende
Mobile robots rely on sensor data to build a representation of their environment. However, sensors usually provide incomplete, inconsistent or inaccurate information. Sensor fusion has been successfully employed to enhance the accuracy of sensor measures. This work proposes and investigates the use of Artificial Intelligence techniques for sensor fusion. Its main goal is to improve the accuracy and reliability of the distance measure between a robot and an object in its work environment, based on measures obtained from different sensors. Several Machine Learning algorithms are investigated to fuse the sensor data. The best model generated by each algorithm is called an estimator. It is shown that the employment of estimators based on Artificial Intelligence can significantly improve the performance achieved by each sensor alone. The Machine Learning algorithms employed have different characteristics, causing the estimators to behave differently in different situations. Aiming at even more accurate and reliable behavior, the estimators are combined in committees. The results obtained suggest that this combination can further improve the reliability and accuracy of the distances measured by the individual sensors and by the estimators used for sensor fusion.
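The committee idea described above can be illustrated with a short sketch. The code below is only a minimal illustration, not the authors' implementation: the simulated sonar and infrared readings, the choice of regressors, and the simple averaging rule are all assumptions made for the example.

```python
# Minimal sketch: fuse noisy distance readings from two simulated sensors with
# several ML estimators, then combine the estimators in an averaging committee.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
true_dist = rng.uniform(0.2, 3.0, size=500)        # true robot-to-object distance (meters)
sonar = true_dist + rng.normal(0.0, 0.15, 500)     # noisy sonar measure
infrared = true_dist + rng.normal(0.0, 0.25, 500)  # noisy infrared measure
X = np.column_stack([sonar, infrared])             # readings from different sensors as features

# Each Machine Learning algorithm yields one "estimator" of the true distance.
estimators = [
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    DecisionTreeRegressor(max_depth=5, random_state=0),
    KNeighborsRegressor(n_neighbors=7),
]
for est in estimators:
    est.fit(X[:400], true_dist[:400])

# Committee: average the individual estimates on held-out readings.
committee = np.mean([est.predict(X[400:]) for est in estimators], axis=0)
print("committee MAE:", np.abs(committee - true_dist[400:]).mean())
```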
Journal of Computer Science and Technology | 2014
Rafael Geraldeli Rossi; Alneu de Andrade Lopes; Thiago de Paulo Faleiros; Solange Oliveira Rezende
Algorithms for numeric data classification have been applied to text classification. Usually the vector space model is used to represent text collections. Characteristics of this representation, such as sparsity and high dimensionality, sometimes impair the quality of general-purpose classifiers. Networks can be used to represent text collections, avoiding the high sparsity and allowing relationships among the different objects that compose a text collection to be modeled. Such network-based representations can improve the quality of the classification results. One of the simplest ways to represent a textual collection as a network is through a bipartite heterogeneous network, which is composed of objects representing the documents connected to objects representing the terms. Bipartite heterogeneous networks do not require the computation of similarities or relations among the objects and can be used to model any type of text collection. Given the advantages of representing text collections through bipartite heterogeneous networks, in this article we present a text classifier that builds a classification model using the structure of a bipartite heterogeneous network. The algorithm, referred to as IMBHN (Inductive Model Based on Bipartite Heterogeneous Network), induces a classification model by assigning weights, for each class of the text collection, to the objects that represent terms. An empirical evaluation using a large number of text collections from different domains shows that the proposed IMBHN algorithm produces significantly better results than the k-NN, C4.5, SVM, and Naive Bayes algorithms.
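As a rough illustration of the representation described above, the sketch below builds a bipartite heterogeneous network from a toy collection: document vertices are linked to term vertices by term occurrences, so no document-document similarities have to be computed. The toy documents and the use of networkx are assumptions made for the example, not the paper's code.

```python
# Sketch: a bipartite heterogeneous network for a tiny text collection.
import networkx as nx
from sklearn.feature_extraction.text import CountVectorizer

docs = ["networks represent text collections",
        "text classification with heterogeneous networks",
        "the vector space model of text"]
vec = CountVectorizer()
counts = vec.fit_transform(docs)                   # document-term occurrence counts
terms = vec.get_feature_names_out()

G = nx.Graph()
G.add_nodes_from((f"d{i}" for i in range(len(docs))), kind="document")
G.add_nodes_from(terms, kind="term")
rows, cols = counts.nonzero()
for d, t in zip(rows, cols):
    # connect a document vertex to a term vertex, weighted by the occurrences
    G.add_edge(f"d{d}", terms[t], weight=int(counts[d, t]))

print(G.number_of_nodes(), "vertices and", G.number_of_edges(), "document-term links")
```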
Information Processing and Management | 2016
Rafael Geraldeli Rossi; Alneu de Andrade Lopes; Solange Oliveira Rezende
Highlights: a scalable algorithm based on bipartite networks to perform transduction; unlabeled data effectively employed to improve classification performance; better performance than algorithms based on the vector space model or on networks; a rigorous evaluation showing the drawbacks of existing transductive algorithms; and a trade-off analysis between inductive supervised and transductive classification.
Transductive classification is a useful way to classify texts when labeled training examples are insufficient. Several algorithms to perform transductive classification considering text collections represented in a vector space model have been proposed. However, these algorithms can be unfeasible in practical applications because of their independence assumption among instances or terms and their other drawbacks. Network-based algorithms avoid the drawbacks of the algorithms based on the vector space model and improve transductive classification. Networks are mostly used for label propagation, in which some labeled objects propagate their labels to other objects through the network connections. Bipartite networks are useful to represent text collections as networks and perform label propagation. The generation of this type of network avoids requirements such as collections with hyperlinks or citations, the computation of similarities among all texts in the collection, and the setup of a number of parameters. In a bipartite heterogeneous network, objects correspond to documents and terms, and the connections are given by the occurrences of terms in documents. The label propagation is performed from documents to terms and then from terms to documents iteratively. Nevertheless, instead of using terms just as a means of label propagation, in this article we propose the use of the bipartite network structure to define the relevance scores of terms for classes through an optimization process and then propagate these relevance scores to define labels for unlabeled documents. The new document labels are used to redefine the relevance scores of terms, which consequently redefine the labels of unlabeled documents in an iterative process. We demonstrate that the proposed approach surpasses transductive classification algorithms based on the vector space model or on networks. Moreover, the proposed algorithm effectively makes use of unlabeled documents to improve classification and is faster than other transductive algorithms.
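A highly simplified sketch of the propagation scheme described above: per-class term scores are computed from the currently labeled documents, propagated back to label the unlabeled documents, and the two steps repeat with the known labels clamped. The toy data and the normalization rule are illustrative assumptions, not the paper's optimization procedure.

```python
# Rough sketch of transductive propagation on a bipartite document-term network.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

docs = ["cheap pills offer", "meeting schedule today", "offer cheap prizes",
        "today project meeting", "cheap offer now"]
labels = np.array([0, 1, -1, -1, -1])              # -1 marks an unlabeled document

A = CountVectorizer(binary=True).fit_transform(docs).toarray()  # document-term links
n_classes = 2
F = np.zeros((len(docs), n_classes))
F[labels >= 0] = np.eye(n_classes)[labels[labels >= 0]]         # seed the labeled documents

for _ in range(10):
    term_scores = A.T @ F                          # documents pass class evidence to their terms
    term_scores /= np.maximum(term_scores.sum(axis=1, keepdims=True), 1e-12)
    F = A @ term_scores                            # terms pass relevance scores back to documents
    F /= np.maximum(F.sum(axis=1, keepdims=True), 1e-12)
    F[labels >= 0] = np.eye(n_classes)[labels[labels >= 0]]     # clamp the known labels

print("labels inferred for the unlabeled documents:", F[labels < 0].argmax(axis=1))
```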
International Conference on Data Mining | 2012
Rafael Geraldeli Rossi; Thiago de Paulo Faleiros; Alneu de Andrade Lopes; Solange Oliveira Rezende
Algorithms for the categorization of numeric data are usually applied to text categorization after a preprocessing phase that assigns weights to textual terms treated as attributes. However, due to the characteristics of textual data, some algorithms for data categorization are not efficient for text categorization. Characteristics of textual data such as sparsity and high dimensionality sometimes impair the quality of general-purpose classifiers. Here, we propose a text classifier based on a bipartite heterogeneous network used to represent textual document collections. The algorithm induces a classification model by assigning weights to the objects that represent the terms of the textual document collection. The induced weights correspond to the influence of the terms on the classification of the documents in which they appear. The least-mean-square algorithm is used in the inductive process. An empirical evaluation using a large number of textual document collections shows that the proposed IMBHN algorithm produces significantly better results than the k-NN, C4.5, SVM and Naïve Bayes algorithms.
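The least-mean-square induction mentioned above can be sketched as follows: each pass adjusts the weights of the terms occurring in a document so that the document's class scores move toward its true class. The toy collection, learning rate, and number of passes are illustrative assumptions, not the published IMBHN configuration.

```python
# Minimal sketch of inducing per-class term weights with an LMS-style update.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

docs = ["buy cheap watches", "team meeting minutes",
        "cheap watches sale", "minutes of the meeting"]
y = np.array([0, 1, 0, 1])

A = CountVectorizer(binary=True).fit_transform(docs).toarray()  # document-term incidence
n_terms, n_classes = A.shape[1], 2
W = np.zeros((n_terms, n_classes))                 # weight of each term for each class

eta = 0.1                                          # learning rate (illustrative value)
for _ in range(100):
    for i in range(len(docs)):
        error = np.eye(n_classes)[y[i]] - A[i] @ W  # desired minus current class scores
        W += eta * np.outer(A[i], error)            # correct the weights of the document's terms

print("predicted classes:", (A @ W).argmax(axis=1))
```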
ACM Symposium on Applied Computing | 2014
Rafael Geraldeli Rossi; Alneu de Andrade Lopes; Solange Oliveira Rezende
A bipartite heterogeneous network is one of the simplest ways to represent a textual document collection. In this case, the network consists of two types of vertices, representing documents and terms, with links connecting terms to the documents in which they occur. Transductive algorithms are usually applied to classify networked objects. This type of classification is typically used when few labeled examples are available, which makes it worthwhile in practical situations. Nevertheless, existing transductive algorithms require users to set several parameters that significantly affect the classification accuracy. In this paper, we propose a parameter-free algorithm for transductive classification of textual data, referred to as LPBHN (Label Propagation using Bipartite Heterogeneous Networks). LPBHN uses a bipartite heterogeneous network to perform the classification task. The proposed algorithm achieves accuracy equivalent to or higher than that of state-of-the-art algorithms for transductive classification in heterogeneous or homogeneous networks.
Engineering Management Journal | 2013
Janaina Mascarenhas Hornos da Costa; Henrique Rozenfeld; Creusa Sayuri Tahara Amaral; Ricardo M. Marcacini; Solange Oliveira Rezende
One of the ways to improve the New Product Development (NPD) process is to eliminate the problems that arise over years of practice. This article describes the systematization of recurrent NPD management problems. The main NPD problems were found to be recurrent; hence, the systematization resulting from this study allows for the identification of NPD areas requiring special attention from both practitioners and researchers. This identification enables researchers to define new areas of academic research, and practitioners to focus on specific improvement projects. Eight case studies were conducted, in which 124 NPD personnel were interviewed, involving the diagnosis of the NPD process and the identification and selection of NPD improvement projects. The diagnostic method applied was the Current Reality Tree (CRT), which is a cognitive method for identifying undesirable effects (problems) in a process. Text mining techniques were then applied to identify similarities among these CRTs. Lastly, NPD categories were created to classify the NPD problems. An analysis of the rate of problems per category underlined the importance of diagnosing the NPD process. It was concluded that process and project management are just as critical as product strategy definition and human resource management. Additionally, we concluded that companies would gain greater benefits by focusing on the aforementioned areas before investing in information and communication technology. Potential NPD pitfalls may be avoided if companies adopt proactive management actions to mitigate their recurrent NPD problems based on the results presented here.
Document Engineering | 2013
Ricardo Marcondes Marcacini; Solange Oliveira Rezende
In many text clustering tasks, there is some valuable knowledge about the problem domain, in addition to the original textual data involved in the clustering process. Traditional text clustering methods are unable to incorporate such additional (privileged) information into data clustering. Recently, a new paradigm called LUPI - Learning Using Privileged Information - was proposed by Vapnik to incorporate privileged information in classification tasks. In this paper, we extend the LUPI paradigm to deal with text clustering tasks. In particular, we show that the LUPI paradigm is potentially promising for incremental hierarchical text clustering, being very useful for organizing large textual databases. In our method, the privileged information about the text documents is applied to refine an initial clustering model by means of consensus clustering. The initial model is used for incremental clustering of the remaining text documents. We carried out an experimental evaluation on two benchmark text collections and the results showed that our method significantly improves the clustering accuracy when compared to a traditional hierarchical clustering method.
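A rough sketch of the pipeline described above: an initial batch is clustered with both the original features and the privileged features, the two partitions are combined by a simple co-association consensus step, and the remaining documents are then assigned incrementally to the nearest refined cluster. The random features, the consensus rule, and the centroid-based incremental step are assumptions made for the example, not the paper's exact procedure.

```python
# Sketch: consensus refinement with privileged features, then incremental assignment.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import cdist, squareform

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 20))                      # original text features (e.g. bag-of-words)
X_priv = rng.normal(size=(60, 5))                  # privileged information about the same documents

init, rest = X[:30], X[30:]                        # initial batch vs. documents arriving later
k = 3
labels_orig = fcluster(linkage(init, method="average"), k, criterion="maxclust")
labels_priv = fcluster(linkage(X_priv[:30], method="average"), k, criterion="maxclust")

# Consensus step: average the co-association of the two partitions and cluster again.
co = ((labels_orig[:, None] == labels_orig[None, :]).astype(float) +
      (labels_priv[:, None] == labels_priv[None, :]).astype(float)) / 2.0
refined = fcluster(linkage(squareform(1.0 - co, checks=False), method="average"),
                   k, criterion="maxclust")

# Incremental phase: each remaining document joins the closest refined cluster.
uniq = np.unique(refined)
centroids = np.vstack([init[refined == c].mean(axis=0) for c in uniq])
assigned = uniq[cdist(rest, centroids).argmin(axis=1)]
print("incremental assignments (first 10):", assigned[:10])
```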
Journal of the Brazilian Computer Society | 2014
Merley da Silva Conrado; Ariani Di Felippo; Thiago Alexandre Salgueiro Pardo; Solange Oliveira Rezende
Background: Term extraction is highly relevant as it is the basis for several tasks, such as the building of dictionaries, taxonomies, and ontologies, as well as the translation and organization of text data. Methods and Results: In this paper, we present a survey of the state of the art in automatic term extraction (ATE) for the Brazilian Portuguese language. The main contributions and projects related to this task are classified according to the knowledge they use: statistical, linguistic, and hybrid (statistical and linguistic). We also present a review of the corpora used in term extraction in Brazilian Portuguese, as well as a geographic mapping of Brazil regarding such contributions, projects, and corpora, considering their origins. Conclusions: In spite of the importance of ATE, there are still several gaps to be filled, for instance, the lack of consensus regarding the formal definition of the meaning of ‘term’. Such gaps are larger for Brazilian Portuguese than for other languages, such as English, Spanish, and French. Examples of gaps for Brazilian Portuguese include the lack of a baseline ATE system, as well as the use of more sophisticated linguistic information, such as the WordNet and Wikipedia knowledge bases. Nevertheless, there is an increase in the number of contributions related to ATE and an interesting tendency to use contrasting corpora and domain stoplists, even though most contributions only use frequency, noun phrases, and morphosyntactic patterns.
ACM Symposium on Applied Computing | 2012
Bruno M. Nogueira; Alípio Mário Jorge; Solange Oliveira Rezende
In this paper, we address the problem of semi-supervised hierarchical clustering by using an active clustering solution with cluster-level constraints. This active learning approach is based on the concept of merge confidence in agglomerative clustering. The proposed method was compared with an unsupervised algorithm (average-link) and a semi-supervised algorithm based on pairwise constraints. The results show that our algorithm tends to be better than the pairwise-constrained algorithm and can achieve a significant improvement over the unsupervised algorithm.
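A toy sketch of the merge-confidence idea: at each agglomerative step the two closest clusters are merged, but when the gap between the best and second-best candidate merges is small, the algorithm consults an oracle before committing. The confidence measure, the threshold, and the oracle below are illustrative stand-ins for the paper's cluster-level constraints.

```python
# Toy sketch of confidence-based active agglomerative clustering.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.3, (5, 2)), rng.normal(3.0, 0.3, (5, 2))])
true = np.array([0] * 5 + [1] * 5)                 # hidden labels used only by the oracle

def oracle_should_merge(a, b):
    # stands in for a human answering a cluster-level "should these merge?" query
    return true[a[0]] == true[b[0]]

clusters = [[i] for i in range(len(X))]
while len(clusters) > 2:
    # average-link distance between every pair of current clusters
    pairs = [(cdist(X[a], X[b]).mean(), i, j)
             for i, a in enumerate(clusters) for j, b in enumerate(clusters) if i < j]
    pairs.sort()
    best_dist, i, j = pairs[0]
    confidence = (pairs[1][0] - best_dist) / max(pairs[1][0], 1e-12)
    if confidence < 0.2 and not oracle_should_merge(clusters[i], clusters[j]):
        _, i, j = pairs[1]                         # low confidence and vetoed: take the next pair
    merged = clusters[i] + clusters[j]
    clusters = [c for idx, c in enumerate(clusters) if idx not in (i, j)] + [merged]

print("final clusters:", clusters)
```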