Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Romina Torres is active.

Publication


Featured research published by Romina Torres.


International Conference on Web Services | 2011

Improving Web API Discovery by Leveraging Social Information

Romina Torres; Boris Tapia; Hernán Astudillo

A common problem that mashup developers face is discovering APIs that suit their needs. This primary task becomes harder, more tedious and more time-consuming as new APIs proliferate. As humans, we learn by example, following the community's previous decisions when creating mashups; yet most techniques do not consider reusing this social information at all. In this paper, we propose combining current discovery techniques (exploration) with social information (exploitation). Our preliminary results show that, by considering the reciprocal influence of both sources, the discovery process reveals APIs that would otherwise remain low-ranked because of preferential attachment (popularity) and/or the lack of better descriptions (discovery techniques). We present a case study focusing on a public Web-based API registry.
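The exploration/exploitation blend described above can be pictured with a simple weighted combination of a content-based match score and a social co-usage score. This is only an illustrative sketch under our own assumptions: the function and API names, scores and the weight `alpha` are hypothetical, not the paper's actual model.

```python
def rank_apis(candidates, alpha=0.6):
    """candidates: list of (name, match_score, social_score), scores in [0, 1].

    Blends a discovery-technique score (exploration) with a social score
    derived from past mashup co-usage (exploitation)."""
    scored = [
        (name, alpha * match + (1 - alpha) * social)
        for name, match, social in candidates
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

apis = [
    ("popular-maps-api", 0.40, 0.95),   # popular but weak textual match
    ("niche-geo-api",    0.90, 0.10),   # strong match, little social signal
    ("average-api",      0.55, 0.50),
]
print(rank_apis(apis))
```

With a blend like this, an API that is weak on one signal can still surface if the other signal is strong, which is the effect the abstract describes.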


IEEE International Conference on Services Computing | 2011

Self-Adaptive Fuzzy QoS-Driven Web Service Discovery

Romina Torres; Hernán Astudillo; Rodrigo Salas

Due to the high proliferation of web services, selecting the best services from functionally equivalent service providers has become a real challenge, in which the quality of the services plays a crucial role. But quality is uncertain; therefore, several researchers have applied fuzzy logic to address the imprecision of quality-of-service (QoS) constraints. Furthermore, the service market is highly dynamic and competitive: web services constantly enter and exit the market, and they continually improve themselves due to the competition. Current fuzzy-based techniques are expert- and/or consensus-based, and therefore too fragile, expensive, non-scalable and non-self-adaptive. In this paper we introduce a new methodology to support requesters in selecting Web services by automatically connecting imprecisely defined QoS constraints with overly precise service QoS offerings over time. We address the dynamism of the market by periodically applying a modified fuzzy c-means module that allows providers to organize themselves automatically around the QoS levels. The advantage of our approach is that consumers can specify their QoS constraints without really knowing what the current best quality ranges are. We illustrate our approach with a case study.
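The self-organization step can be pictured with a miniature fuzzy c-means over a single QoS attribute. This is a sketch of standard fuzzy c-means under our own simplifications (deterministic initialization, one dimension), not the paper's modified algorithm; all names and numbers are illustrative.

```python
import numpy as np

def fuzzy_cmeans_1d(x, c=2, m=2.0, iters=50):
    """Minimal fuzzy c-means on one QoS attribute (e.g. response time, ms)."""
    # Deterministic init: spread the initial centers across the data range.
    centers = np.quantile(x, np.linspace(0.1, 0.9, c))
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12     # (c, n) distances
        # Standard membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = 1.0 / (d ** (2 / (m - 1)) * (d ** (-2 / (m - 1))).sum(axis=0))
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)                   # weighted means
    return centers, u

# Providers' advertised response times (ms): a "fast" and a "slow" group.
times = np.array([2.0, 3.0, 2.5, 40.0, 38.0, 45.0])
centers, u = fuzzy_cmeans_1d(times)
print(np.sort(centers))   # one center near the fast group, one near the slow group
```

Each provider ends up with a degree of membership in each QoS level, rather than a hard assignment, which is what lets imprecise consumer constraints ("fast enough") be matched against the clusters.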


2012 Second IEEE International Workshop on Model-Driven Requirements Engineering (MoDRE) | 2012

Mitigating the obsolescence of quality specifications models in service-based systems

Romina Torres; Nelly Bencomo; Hernán Astudillo

Requirements-aware systems address the need to reason about uncertainty at runtime to support adaptation decisions, by representing quality-of-service (QoS) requirements for service-based systems (SBS) with precise values in run-time queryable specification models. However, current approaches do not support updating the specification to reflect changes in the service market, such as newly available services or improved QoS of existing ones. Thus, even if the specification models reflect requirements that were acceptable at design time, they may become obsolete and miss opportunities for system improvement by self-adaptation. This article proposes to distinguish “abstract” and “concrete” specification models: the former consists of linguistic variables (e.g. “fast”) agreed upon at design time, and the latter consists of precise numeric values (e.g. “2 ms”) that are dynamically calculated at run-time, thus incorporating up-to-date QoS information. If and when freshly calculated concrete specifications are no longer satisfied by the current service configuration, an adaptation is triggered. The approach was validated using four simulated SBS that use services from a previously published, real-world dataset; in all cases, the system was able to detect unsatisfied requirements at run-time and trigger suitable adaptations. Ongoing work focuses on policies that determine when to recalculate specifications. This approach will allow engineers to build SBS that are protected against market-caused obsolescence of their requirements specifications.
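The abstract-vs-concrete split can be sketched in a few lines: the linguistic variable is fixed at design time, while its numeric threshold is recomputed from the market at run-time. All names, the quartile rule and the numbers below are our own illustrative assumptions, not the paper's calibration.

```python
def concrete_threshold(market_latencies_ms, quantile=0.25):
    """Concrete model for the abstract term 'fast': within the best
    quartile of the latencies currently observed in the market."""
    ordered = sorted(market_latencies_ms)
    k = max(0, int(quantile * len(ordered)) - 1)
    return ordered[k]

def needs_adaptation(current_service_ms, market_latencies_ms):
    """Trigger an adaptation when the freshly computed concrete
    specification is no longer satisfied by the bound service."""
    return current_service_ms > concrete_threshold(market_latencies_ms)

# At design time 'fast' meant <= 25 ms; later, faster services enter the
# market and 'fast' tightens, so a once-acceptable 22 ms service now
# triggers an adaptation even though it never degraded.
old_market = [20, 25, 30, 40, 50, 60, 80, 100]
new_market = [2, 3, 5, 8, 20, 25, 30, 40]
print(needs_adaptation(22, old_market), needs_adaptation(22, new_market))
```

This is precisely the "market-caused obsolescence" scenario: the service's own QoS is unchanged, but the meaning of the linguistic term has moved with the market.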


International Conference of the Chilean Computer Science Society | 2010

MACOCO: A Discoverable Component Composition Framework Using a Multiagent System

Romina Torres; Hernán Astudillo; Enrique Canessa

Component-based development is a useful approach for building large, complex software systems. However, component discovery and component composition are quite complex and expensive tasks, due to the ever-growing number of components in the market. This article proposes to model component providers and consumers as a multi-agent system, allowing providers to advertise their offerings and consumers to express their requirements within a shared quality model. The main contribution of this paper is a testable model of the component discovery process, in which software components can become active actors and seek systems that may incorporate them. A case study from the financial domain is used to illustrate the method and to discuss its benefits.


International Symposium on Neural Networks | 2003

Robust expectation maximization learning algorithm for mixture of experts

Romina Torres; Rodrigo Salas; Héctor Allende; Claudio Moraga

Text Categorization (TC), the assignment of predefined categories to documents of a corpus, plays an important role in a wide variety of information organization and management tasks in Information Retrieval (IR). It involves managing a great deal of information, some of which may be noisy or irrelevant; hence, a prior feature reduction could improve classification performance. In this paper we propose a wrapper approach. This kind of approach is time-consuming and sometimes infeasible, but our wrapper explores a reduced number of feature subsets and uses Support Vector Machines (SVM) as the evaluation system; these two properties make the wrapper fast enough to deal with the large number of features present in text domains. Using the Reuters-21578 corpus, we also compare this wrapper with the common approach to feature reduction widely applied in TC, which consists of filtering according to scoring measures.


International Work-Conference on Artificial and Natural Neural Networks | 2009

Robust Estimation of Confidence Interval in Neural Networks applied to Time Series

Rodrigo Salas; Romina Torres; Héctor Allende; Claudio Moraga

Artificial neural networks (ANN) have been widely used in regression and prediction problems, and it is usually desirable that some form of confidence bound be placed on the predicted value. A number of methods have been proposed for estimating the uncertainty associated with a value predicted by a feedforward artificial neural network (FANN), but these methods are computationally intensive or only valid under certain assumptions, which are rarely satisfied in practice. We present theoretical results on the construction of confidence intervals for the prediction of nonlinear time series modeled by FANN. The method is based on M-estimators, a robust learning algorithm for parameter estimation when the data set is contaminated. The confidence interval we propose is constructed from the study of the influence function of the estimator. We demonstrate our technique on computer-generated time series data.
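As a reminder of the standard machinery the abstract alludes to (a generic sketch of influence-function-based variance estimation, not the paper's exact derivation), an M-estimator defined by $\sum_i \psi(x_i,\theta)=0$ has:

```latex
\[
\mathrm{IF}(x;\psi,F) \;=\; M^{-1}\,\psi(x,\theta_0),
\qquad
M \;=\; -\,\mathbb{E}_F\!\left[\partial_\theta\,\psi(x,\theta_0)\right],
\]
\[
V(\psi,F) \;=\; \mathbb{E}_F\!\left[\mathrm{IF}\,\mathrm{IF}^{\top}\right]
           \;=\; M^{-1}\,\mathbb{E}_F\!\left[\psi\,\psi^{\top}\right] M^{-\top},
\]
\[
\hat{y}_t \;\pm\; z_{1-\alpha/2}\,\sqrt{\widehat{V}/n}
\qquad \text{(approximate $(1-\alpha)$ confidence interval).}
\]
```

The key point is that the interval's width is driven by the influence function of the robust estimator rather than by the raw residual variance, so contaminated observations have bounded effect.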


Iberian Conference on Pattern Recognition and Image Analysis | 2003

Robust Learning Algorithm for the Mixture of Experts

Héctor Allende; Romina Torres; Rodrigo Salas; Claudio Moraga

The Mixture of Experts (ME) model is a type of modular artificial neural network (MANN) whose architecture is composed of different kinds of networks that compete to learn different aspects of the problem. This model is used when the search space is stratified. The learning algorithm of the ME model consists in estimating the network parameters to achieve a desired performance. To estimate the parameters, some distributional assumptions are made, so the learning algorithm and, consequently, the parameters obtained depend on the distribution. But when the data is exposed to outliers, the assumption is no longer valid; the model is affected and is very sensitive to the data, as shown in this work. We propose a robust learning estimator by means of a generalization of the maximum likelihood estimator called the M-estimator. Finally, a simulation study is presented in which the robust estimator performs better than the maximum likelihood estimator (MLE).
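The essence of replacing the MLE with an M-estimator can be shown on a toy location problem: the Gaussian-likelihood residual is swapped for a bounded score function (here Huber's psi, a common choice we use for illustration; the paper's estimator for the full ME model is more involved).

```python
import numpy as np

def huber_psi(r, k=1.345):
    """Huber score: linear in the residual near zero, clipped beyond k,
    so gross outliers contribute a bounded amount to the gradient."""
    return np.clip(r, -k, k)

def robust_fit_mean(x, iters=100, lr=0.1):
    """Toy M-estimation of a location parameter by gradient ascent on the
    robustified likelihood; in the ME model the same psi would replace the
    residual term of the Gaussian likelihood gradient for each expert."""
    theta = 0.0
    for _ in range(iters):
        theta += lr * huber_psi(x - theta).mean()
    return theta

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(5.0, 1.0, 200),
                       np.array([500.0, 600.0])])   # two gross outliers
print(np.mean(data), robust_fit_mean(data))
```

The sample mean is dragged far from 5 by the two contaminated points, while the M-estimate stays close to the true location, which is the behavior the simulation study in the paper demonstrates for the full ME model.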


World Congress on Services | 2011

Externalizing the Autopoietic Part of Software to Achieve Self-Adaptability

Romina Torres; Hernán Astudillo

The autopoietic/allopoietic duality has been proposed to address the fact that software systems must look to their own survival besides successfully completing their mission. To the best of our knowledge, there are no explicit computational models to operationalize and test this distinction. This article describes a framework to add autopoietic capabilities to composite software systems by using an external self-organized market of service providers willing to provide services to satisfy new requirements and recovery actions. The approach is exemplified with the case of a QoS agreement violation in a composite application system.


International Journal of Computational Intelligence Systems | 2018

An architecture based on computing with words to support runtime reconfiguration decisions of service-based systems

Romina Torres; Rodrigo Salas; Nelly Bencomo; Hernán Astudillo

Service-based systems (SBSs) need to be reconfigured when there is evidence that the selected Web service configurations no longer satisfy the specification models; the decision-related models then need to be updated accordingly. However, such updates need to be performed at the right pace. On the one hand, if the updates are not performed quickly enough, required reconfigurations may go undetected due to the obsolescence of the specification models used at runtime, which were specified at design time. On the other hand, the other extreme is to promote premature reconfiguration decisions based on models that may be highly sensitive to environmental fluctuations, which may affect the stability of these systems. To deal with the required trade-offs, this paper proposes the use of linguistic decision-making (LDM) models to represent specification models of SBSs, and a dynamic computing-with-words (CWW) architecture to assess the models dynamically using a multi-period multi-attribute decision-making (MP-MADM) approach. The proposed solution allows systems in dynamic environments to offer improved stability by better managing the trade-off between the potential obsolescence of the specification models and the required dynamic sensitivity and updating of these models.
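The stability idea can be sketched with a triangular membership for a linguistic label and an exponentially decaying multi-period aggregation, so a single noisy period does not immediately force a reconfiguration. The label shape, decay factor and numbers below are illustrative assumptions, not the paper's calibrated MP-MADM operators.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def satisfies_fast(latency_ms):
    """Degree to which the linguistic label 'fast' holds (hypothetical shape)."""
    return tri(latency_ms, 0, 5, 30)

def multi_period_score(latencies, decay=0.7):
    """Multi-period satisfaction, newest period first; older periods
    weigh less, smoothing out transient fluctuations."""
    weights = [decay ** i for i in range(len(latencies))]
    return (sum(w * satisfies_fast(x) for w, x in zip(weights, latencies))
            / sum(weights))

history = [8, 7, 50, 6]    # one bad spike three periods ago
print(round(multi_period_score(history), 3))
```

A reconfiguration policy can then compare this smoothed score against a threshold, trading responsiveness (high decay) against stability (low decay), which is the trade-off the abstract describes.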


International Conference of the Chilean Computer Science Society | 2014

VirtualMarket: Extending Chilecompra with Agent Capabilities for Identifying Providers Associativity Opportunities and Negotiate Alliance Participation

Romina Torres; Diana Biscay; Rodrigo Salas; Oscar Cornejo; Marcelo Aliquintuy; Hernán Astudillo

Mercado Público is ChileCompra's platform for tenders, an environment where Chilean companies offer and demand goods and services of all kinds. The main aim of this platform is to make contracting processes more efficient and transparent. Nowadays, tenders asking for more than one good and/or service are answered separately; thus, after receiving the offers from the sellers, the buyer must select the best offer for each part of the tender. Unfortunately, this is an inefficient and non-optimal process. On the one hand, buyers spend much time building a joint contract in which they do not necessarily obtain the best deal; on the other hand, small providers are less likely to be selected (if they apply at all) when competing with bigger providers or alliances of them (which already offer most products, typically at better prices). To address these issues, we present VirtualMarket, a multi-agent platform that extends the ChileCompra platform by representing providers as software agents (with a willingness to cooperate) which, through the creation of alliances, are capable of naturally identifying associativity opportunities when tenders are published and negotiating pre-agreements that make them more competitive. In this ongoing work, we propose to use a slightly modified version of the Zeuthen strategy to support provider agents during the negotiation of the terms of alliance formation without revealing their utility functions. Our preliminary results on the feasibility and performance of our proposal are encouraging.
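For readers unfamiliar with it, the core step of the classic (unmodified) Zeuthen strategy can be sketched as follows: each agent computes its risk of conflict from its own utilities, and the agent with the lower risk concedes in that round. In the paper's setting the utilities stay private to each agent; this sketch computes both sides only for illustration, and all names and numbers are hypothetical.

```python
def zeuthen_risk(u_own_offer, u_their_offer):
    """Risk of conflict = relative loss from accepting the opponent's
    offer instead of one's own. An agent with low risk loses little by
    conceding, so it should be the one to concede."""
    if u_own_offer == 0:
        return 1.0
    return (u_own_offer - u_their_offer) / u_own_offer

def who_concedes(u_a, u_b):
    """u_a = (A's utility of A's offer, A's utility of B's offer);
    u_b likewise from B's perspective."""
    risk_a = zeuthen_risk(*u_a)
    risk_b = zeuthen_risk(*u_b)
    return "A" if risk_a <= risk_b else "B"

# A values its own offer at 10 and B's offer at 6 (risk 0.4); B values
# its own offer at 10 and A's at 2 (risk 0.8) -> A concedes this round.
print(who_concedes((10, 6), (10, 2)))   # -> A
```

Iterating this concession rule drives the agents toward an agreement; the paper's modification addresses doing so without the agents disclosing these utility values to each other.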

Collaboration


Dive into Romina Torres's collaborations.

Top Co-Authors

Héctor Allende
Adolfo Ibáñez University

Claudio Moraga
Technical University of Dortmund

Enrique Canessa
Adolfo Ibáñez University