Angelina Espinoza
Technical University of Madrid
Publications
Featured research published by Angelina Espinoza.
Innovations in Systems and Software Engineering | 2011
Angelina Espinoza; Juan Garbajosa
Traceability is recognized as important for supporting agile development processes. However, an analysis of existing traceability approaches shows that they depend strongly on the characteristics of traditional development processes, and this paper argues that this dependence is a drawback for adequately supporting agile processes: as discussed here, some concepts do not carry the same semantics in traditional and agile methodologies. This paper proposes three features that traceability models should support in order to be less dependent on a specific development process: (1) user-definable traceability links, (2) roles, and (3) linkage rules. To show how these features can be applied, an emerging traceability metamodel (TmM) is used throughout the paper; TmM supports the definition of traceability methodologies adapted to the needs of each project. As shown, introducing these three features into traceability models yields two main advantages: (1) the support they can provide to agile process stakeholders is significantly more extensive, and (2) a higher degree of automation becomes possible, making a methodical trace acquisition and maintenance process adapted to agile processes feasible.
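A minimal sketch of how the three proposed features could be represented: user-definable link types, roles attached to link ends, and linkage rules constraining which artifact kinds a link type may connect. This is not the actual TmM metamodel; all type and attribute names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class LinkType:
    """User-definable traceability link type, e.g. 'satisfies' or 'tests'."""
    name: str
    source_role: str   # role played by the source artifact
    target_role: str   # role played by the target artifact

@dataclass
class LinkageRule:
    """Constrains which artifact kinds a link type may connect."""
    link_type: LinkType
    source_kind: str
    target_kind: str

    def allows(self, source_kind: str, target_kind: str) -> bool:
        return (source_kind, target_kind) == (self.source_kind, self.target_kind)

@dataclass
class TraceLink:
    link_type: LinkType
    source_id: str
    target_id: str

@dataclass
class TraceModel:
    rules: list = field(default_factory=list)
    links: list = field(default_factory=list)

    def add_link(self, link: TraceLink, source_kind: str, target_kind: str) -> None:
        # A link is accepted only if some linkage rule permits it, which is
        # what makes automated trace acquisition checkable.
        if not any(r.link_type == link.link_type and r.allows(source_kind, target_kind)
                   for r in self.rules):
            raise ValueError(f"no linkage rule allows {link.link_type.name} "
                             f"from {source_kind} to {target_kind}")
        self.links.append(link)

# Example: in an agile project, a 'tests' link connects an acceptance test to a story.
tests = LinkType("tests", source_role="verifier", target_role="verified item")
model = TraceModel(rules=[LinkageRule(tests, "acceptance-test", "user-story")])
model.add_link(TraceLink(tests, "AT-7", "US-12"), "acceptance-test", "user-story")
```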
international conference on industrial informatics | 2011
Mariano Ortega de Mues; Alejandro Alvarez; Angelina Espinoza; Juan Garbajosa
Given the challenges posed by the smart grid paradigm, we propose a distributed intelligent ICT architecture built upon a basic brick named PGDIN (Power Grid Distributed Intelligence Node), which integrates a DDS middleware, an ESP/CEP engine, a distributed cache, and an OWL unified model for power standards. A case study dealing with substation monitoring and control is also introduced.
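A hedged sketch of how a PGDIN node could compose its four building blocks. The class and method names are assumptions for illustration, with plain Python stand-ins for DDS, the CEP engine, the cache, and the OWL model, not the architecture's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Pgdin:
    """Illustrative composition of the four PGDIN building blocks."""
    cache: dict = field(default_factory=dict)        # stands in for the distributed cache
    model: dict = field(default_factory=dict)        # stands in for the OWL unified model
    rules: list = field(default_factory=list)        # stands in for ESP/CEP rules
    subscribers: list = field(default_factory=list)  # stands in for DDS subscriptions

    def subscribe(self, handler: Callable[[dict], None]) -> None:
        self.subscribers.append(handler)

    def publish(self, event: dict) -> None:
        # DDS-like publish: cache the measurement, run CEP rules, notify peers.
        self.cache[event["id"]] = event
        for rule in self.rules:
            rule(event, self.cache, self.model)
        for handler in self.subscribers:
            handler(event)

def overvoltage_rule(event, cache, model):
    # Toy CEP rule: flag voltages above the limit recorded in the model.
    limit = model.get("voltage_limit_kv", 420)
    if event.get("voltage_kv", 0) > limit:
        print(f"alarm: {event['id']} exceeds {limit} kV")

node = Pgdin(model={"voltage_limit_kv": 420}, rules=[overvoltage_rule])
node.publish({"id": "substation-3", "voltage_kv": 431})
```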
IEEE Transactions on Industrial Informatics | 2013
Angelina Espinoza; Yoseba K. Penya; Juan Carlos Nieves; Mariano Ortega; Aitor Peña; Daniel Rodriguez
This paper presents an application of business intelligence (BI) to electricity management systems in the context of the Smart Grid domain. Combining semantic Web technologies (SWT) and elements of grid computing (GC), we have designed a distributed architecture of intelligent nodes, called power grid distributed intelligence nodes (PGDINs). This distributed architecture supports the majority of grid management activities in an intelligent and collaborative way by means of distributed processing of semantic data. A node collaborative scheme is defined based on the logical states that each node adopts according to the events occurring in the grid. A specific BPEL business workflow is formally defined for each logical state, based on the node's knowledge base (an electrical model) and the distributed data. The introduced core workflows allow the potential grid behavior to be predefined for when a business requirement is triggered. Thus, this approach helps the grid react and return to a stable state, defined as a working state that facilitates the provision of the required business tasks. We have validated our approach by simulating a well-known use case, energy balancing verification, fed with real data from the Spanish electrical grid.
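The state-to-workflow dispatch described above can be pictured as a registry mapping each logical state to its handler. The state names and the one-workflow-per-state idea follow the abstract; the dict-based registry and the simplified balancing check are illustrative assumptions standing in for the formally defined BPEL workflows.

```python
from enum import Enum, auto

class NodeState(Enum):
    STABLE = auto()
    BALANCING = auto()   # e.g. energy balancing verification
    FAULTED = auto()

def balancing_workflow(node_kb: dict, readings: list) -> NodeState:
    # Stand-in for the BPEL energy-balancing workflow: compare injected and
    # consumed energy against a tolerance taken from the node knowledge base.
    injected = sum(r["mwh"] for r in readings if r["kind"] == "generation")
    consumed = sum(r["mwh"] for r in readings if r["kind"] == "load")
    tolerance = node_kb.get("balance_tolerance_mwh", 1.0)
    return NodeState.STABLE if abs(injected - consumed) <= tolerance else NodeState.FAULTED

WORKFLOWS = {NodeState.BALANCING: balancing_workflow}

def react(state: NodeState, node_kb: dict, readings: list) -> NodeState:
    """Run the workflow registered for the current logical state, if any."""
    workflow = WORKFLOWS.get(state)
    return workflow(node_kb, readings) if workflow else state

readings = [{"kind": "generation", "mwh": 10.2}, {"kind": "load", "mwh": 10.6}]
print(react(NodeState.BALANCING, {"balance_tolerance_mwh": 0.5}, readings))
```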
Engineering Applications of Artificial Intelligence | 2013
Juan Carlos Nieves; Angelina Espinoza; Yoseba K. Penya; Mariano Ortega de Mues; Aitor Peña
The smart grid vision demands both syntactic interoperability, in order to physically interchange data, and semantic interoperability, to properly understand and interpret its meaning. To this end, the IEC and the EPRI have backed the harmonization of two widely used industrial standards, the CIM and IEC 61850, as the global unified ontology in the smart grid scenario. Still, persisting such a huge general ontology in each and every member of a distributed system is neither practical nor feasible. Moreover, the smart grid will be a heterogeneous conglomerate of legacy and upcoming architectures, which requires, first, the ability to represent all existing assets in the power network as well as new, as-yet-unknown ones, and second, the collaboration of different entities of the system to carry out complex activities. Finally, the smart grid presents diverse time-span requirements, such as real time, and all of them must be addressed efficiently while using resources sparingly. Against this background, we put forward an architecture of intelligent nodes spread throughout the smart grid structure. Each intelligent node holds only a profile of the global ontology. Moreover, by adding reasoning abilities, we simultaneously achieve the required distribution of intelligence and local decision making. Furthermore, we address the aforementioned real-time and quasi-real-time requirements by integrating stream data processing tools within the intelligent node. Combined with the knowledge base profile and the reasoning capability, our intelligent architecture supports semantic stream reasoning. We have illustrated the feasibility of this approach with a prototype composed of three substations and the description of several complex activities involving a number of different entities of the smart grid. We have also addressed the potential extension of the unified ontology.
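One way to read "each intelligent node only has a profile of the global ontology" is as a closure over the concepts a node actually uses. The sketch below computes such a profile as the set of concepts reachable from a node's seed concepts through dependency links; the toy concept graph and all names are assumptions, not the CIM/IEC 61850 harmonized model itself.

```python
# Toy "ontology" as a concept graph: concept -> concepts it depends on
# (superclasses, property ranges, ...). A real CIM/IEC 61850 model is far larger.
GLOBAL_ONTOLOGY = {
    "Breaker": ["Switch"],
    "Switch": ["ConductingEquipment"],
    "ConductingEquipment": ["Equipment"],
    "Equipment": [],
    "AnalogMeasurement": ["Measurement"],
    "Measurement": [],
}

def profile(seeds: set) -> set:
    """Concepts a node needs: the seeds plus everything they transitively reference."""
    needed, frontier = set(), list(seeds)
    while frontier:
        concept = frontier.pop()
        if concept in needed:
            continue
        needed.add(concept)
        frontier.extend(GLOBAL_ONTOLOGY.get(concept, []))
    return needed

# A substation node that only monitors breakers carries a small profile,
# instead of persisting the whole unified ontology.
print(sorted(profile({"Breaker", "AnalogMeasurement"})))
```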
international conference on industrial informatics | 2011
Angelina Espinoza; Mariano Ortega; Carlos Angel Iglesias Fernandez; Juan Garbajosa; Alejandro Alvarez
The Smart Grid can be thought of as integrating electrical and information technologies between any point of generation and any point of consumption in the grid. This is a challenging context for the grid's software-intensive systems, since interoperability among them becomes a fundamental issue for communicating and interchanging data (syntactic interoperability). Beyond supporting communication among systems, interpreting the interchanged data (semantic interoperability) is a requirement for identifying deviations in the grid within time slots as strict as real time. This paper defines a semantic framework that includes (1) a semantic model defined in a powerful ontology language and (2) the processing components needed to interpret data and infer implicit knowledge. Both parts must be integrated into the grid node design to support system interoperability at all localization and decision levels of the Smart Grid. A case study has been performed to show the feasibility of this approach in the context of an ambitious Smart Grid project and the Spanish electrical grid.
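The "infer implicit knowledge" step can be pictured as forward chaining over facts in the semantic model. The rule, facts, and predicate names below are toy assumptions; the paper's framework uses a full ontology language and reasoner rather than this hand-rolled loop.

```python
# Facts are (subject, predicate, object) triples; rules derive new triples.
facts = {
    ("feeder-12", "connectedTo", "substation-3"),
    ("substation-3", "locatedIn", "zone-north"),
}

def locate_rule(fs):
    # If X connectedTo Y and Y locatedIn Z, infer X locatedIn Z.
    derived = set()
    for (x, p1, y) in fs:
        if p1 != "connectedTo":
            continue
        for (y2, p2, z) in fs:
            if p2 == "locatedIn" and y2 == y:
                derived.add((x, "locatedIn", z))
    return derived

# Forward-chain until no new facts appear (a fixpoint).
while True:
    new = locate_rule(facts) - facts
    if not new:
        break
    facts |= new

print(("feeder-12", "locatedIn", "zone-north") in facts)  # True: implicit knowledge
```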
international conference on industrial informatics | 2011
Juan Carlos Nieves; Mariano Ortega de Mues; Angelina Espinoza; Daniel Rodriguez-Alvarez
According to the Electric Power Research Institute (EPRI), a common semantic model is necessary for achieving interoperability in the Smart Grid vision. In this paper, we present an outline of two influential International Electrotechnical Commission standards (CIM and IEC 61850) for building a common semantic model in a Smart Grid vision. In addition, we review two representative approaches suggested by EPRI for harmonizing these standards into a common semantic model. The pros and cons of these two approaches are identified and analyzed.
product focused software process improvement | 2012
Oscar Castro; Angelina Espinoza; Alfonso Martínez-Martínez
Software companies nowadays face fierce competition to deliver better products that offer higher value to the customer. In this context, software product value has become a major concern in the software industry, driving efforts to improve knowledge and understanding of how to estimate software value in early development phases. Otherwise, software companies encounter problems such as releasing products that were developed with high expectations but gradually fall into the category of mediocre products once they reach the market; these high expectations are tightly related to the software value expected by and offered to the customer. This paper presents an approach for estimating software product value, focusing on the development phases. We propose a value-indicators approach to quantify the real value of development products. The aim is to identify potential deviations in the real software value early, by comparing the estimated value against the expected one. We present an internal validation to show the feasibility of this approach to produce benefits in industry projects.
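A minimal sketch of the comparison the approach rests on: score each value indicator on a development artifact, aggregate with weights, and flag deviations of the estimated value from the expected value. The indicator names, weights, and deviation threshold are illustrative assumptions, not the paper's calibrated values.

```python
def estimated_value(scores: dict, weights: dict) -> float:
    """Weighted aggregate of per-indicator scores (each in [0, 1])."""
    total = sum(weights.values())
    return sum(weights[k] * scores[k] for k in weights) / total

def deviation_flags(scores, weights, expected: float, threshold: float = 0.1):
    est = estimated_value(scores, weights)
    return est, (expected - est) > threshold  # True means value is falling short

# Hypothetical indicators scored during an early development phase.
scores = {"requirement_coverage": 0.9, "defect_density": 0.5, "customer_fit": 0.7}
weights = {"requirement_coverage": 2, "defect_density": 1, "customer_fit": 3}
est, at_risk = deviation_flags(scores, weights, expected=0.85)
print(f"estimated={est:.2f}, at risk: {at_risk}")
```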
acm symposium on applied computing | 2010
Angelina Espinoza; Goetz Botterweck; Juan Garbajosa
Software Product Line (SPL) engineering has to deal with interrelated, complex models, such as feature and architecture models, so traceability is fundamental to keeping them consistent. Commonly, a traceability schema must be started from scratch for each project. To avoid this, traceability practices that prove useful for solving day-to-day problems should be modeled explicitly and kept as part of the traceability knowledge gained, so that organizations can reduce the time and effort of implementing traceability in new projects. This paper presents an approach for formalizing and reusing traceability practices in SPL engineering. Using this formalization approach, a traceability metamodel is defined that incorporates the particular traceability practices performed in SPL engineering. Customized traceability methodologies for SPL projects can then be systematically and formally generated from this metamodel. The resulting methodologies already incorporate the traceability knowledge proven successful in previous projects, facilitating the reuse of such practices. In this paper, we focus specifically on the product derivation process to show the advantages of this formalization approach for reusing traceability knowledge.
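A sketch of the product-derivation use the abstract highlights: once feature-to-artifact links are recorded for the product line, deriving a product from a feature selection can reuse them to assemble the product's artifact set, with a dependency check standing in for a linkage rule. All names are illustrative; the paper defines this at the metamodel level rather than as code.

```python
# Feature traces recorded once for the product line (feature -> artifacts).
FEATURE_TRACES = {
    "payment": ["PaymentComponent", "PaymentTests"],
    "reporting": ["ReportEngine", "ReportTemplates"],
    "audit-log": ["AuditAspect"],
}
REQUIRES = {"audit-log": {"payment"}}  # feature dependency captured as a rule

def derive_product(selected: set) -> list:
    """Resolve the artifacts a product needs from its selected features."""
    for feature, deps in REQUIRES.items():
        if feature in selected and not deps <= selected:
            raise ValueError(f"{feature} requires {deps - selected}")
    return sorted({a for f in selected for a in FEATURE_TRACES[f]})

print(derive_product({"payment", "audit-log"}))
```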
symposium on applied computing | 2017
Nour Ali; Alfonso Martínez-Martínez; Lorena Ayuso-Pérez; Angelina Espinoza
Legacy systems need to be continuously maintained and re-engineered to improve their provision of services and their quality attributes. An approach that promises to improve quality attributes and reduce human maintenance tasks is the self-adaptive approach, in which software systems modify their own behaviour. However, there is little guidance in the literature on how to migrate to a self-adaptive system and on evaluating which features should be designed and implemented with self-adaptive behaviour. In this paper, we describe the Self-Adaptive Quality Requirement Elicitation Process (SAQEP), a process for eliciting quality attribute requirements from legacy system stakeholders and specifying which of these requirements are candidates for implementation in a self-adaptive system. SAQEP has been applied to elicit the self-adaptive quality requirements of a legacy system in a Mexican hospital. We also discuss our experience applying this approach.
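A sketch of the final SAQEP step as described: deciding which elicited quality requirements are candidates for self-adaptive implementation. The criterion used here (a requirement qualifies if it can be monitored at runtime and acted on automatically) is our illustrative assumption, not the published process definition.

```python
from dataclasses import dataclass

@dataclass
class QualityRequirement:
    id: str
    attribute: str     # e.g. "availability", "performance"
    monitorable: bool  # can the system observe satisfaction at runtime?
    actuatable: bool   # is there an automatic action that restores it?

def self_adaptive_candidates(reqs):
    # Assumed criterion: only requirements the system can both observe and
    # act upon are worth implementing with self-adaptive behaviour.
    return [r for r in reqs if r.monitorable and r.actuatable]

elicited = [
    QualityRequirement("QR-1", "availability", monitorable=True, actuatable=True),
    QualityRequirement("QR-2", "usability", monitorable=False, actuatable=False),
]
print([r.id for r in self_adaptive_candidates(elicited)])  # ['QR-1']
```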
international conference on agile software development | 2011
Angelina Espinoza; Richard F. Paige; Juan Garbajosa
Traceability defines and maintains relationships among the artifacts involved in the software engineering life cycle. It is widely used for purposes such as project visibility and accountability, and it provides essential support for developing high-quality software systems. Traceability has traditionally been strongly influenced by software process drivers; however, during the last few years the software process has started to take new drivers into account. One such driver is the software product's value. The global value of a product can be understood as its perceived benefits in relation to its cost. Both the business and technical areas of an organization consider the value concept a priority when analyzing software costs and benefits, and for these purposes a tight interaction between these two sides is fundamental.