Ernestina Menasalvas Ruiz
Technical University of Madrid
Publications
Featured research published by Ernestina Menasalvas Ruiz.
Archive | 2000
María C. Fernández-Baizán; Ernestina Menasalvas Ruiz; José María Peña Sánchez
Mining information from large databases has been recognized as a key research topic in database systems. The explosive growth of databases has made it necessary to develop techniques and tools to transform the huge amount of stored data into useful information. Rough Set Theory [17] has been applied since its very beginning to many different application areas. This chapter presents an integration of relational database management technology with Rough Set Theory, showing how the algorithms can be successfully translated into SQL and used as a powerful tool for knowledge discovery.
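The chapter's contribution is the translation of rough-set operations into SQL. As a minimal sketch of that idea (not the chapter's actual queries), the lower approximation of a decision class can be computed with a single GROUP BY / HAVING query; the table and column names below (decision_table, a, b, d) are hypothetical.

    # A minimal sketch (not the chapter's exact SQL) of computing a rough-set
    # lower approximation inside a relational engine. Table and column names
    # (decision_table, a, b, d) are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE decision_table (a INTEGER, b INTEGER, d INTEGER);
    INSERT INTO decision_table VALUES
        (1, 0, 1), (1, 0, 1),   -- consistent class: always d = 1
        (0, 1, 0), (0, 1, 1);   -- inconsistent class: d varies
    """)

    # Lower approximation of d = 1 w.r.t. condition attributes {a, b}:
    # keep only the (a, b) indiscernibility classes whose rows ALL have d = 1.
    lower = conn.execute("""
    SELECT a, b FROM decision_table
    GROUP BY a, b
    HAVING MIN(d) = 1 AND MAX(d) = 1
    """).fetchall()
    print(lower)  # [(1, 0)] -- the only class certainly contained in d = 1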
Lecture Notes in Computer Science | 1998
María C. Fernández-Baizán; Ernestina Menasalvas Ruiz; José María Peña Sánchez; Borja Pardo Pastrana
In this paper we outline the design of an RDBMS that provides the user with traditional query capabilities as well as KDD queries. Our approach is not just another system that adds KDD capabilities on top; the design aims to integrate these KDD capabilities into the RDBMS core. The approach also defines a generic engine for data mining algorithms that allows the system's capabilities to be extended easily as new algorithms are implemented.
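To illustrate the generic-engine idea in a few lines, here is a hypothetical sketch of an algorithm registry behind a common interface; in the paper the engine lives inside the RDBMS core, and all class and method names below are invented for illustration.

    # Hypothetical sketch of a generic mining engine: new algorithms plug into
    # a common interface, so the engine need not change when one is added.
    from abc import ABC, abstractmethod
    from typing import Any, Iterable

    class MiningAlgorithm(ABC):
        name: str

        @abstractmethod
        def run(self, rows: Iterable[tuple]) -> Any:
            """Consume relational tuples, return a mined model."""

    class MiningEngine:
        def __init__(self):
            self._registry: dict[str, MiningAlgorithm] = {}

        def register(self, algo: MiningAlgorithm) -> None:
            self._registry[algo.name] = algo

        def execute(self, algo_name: str, rows: Iterable[tuple]) -> Any:
            # A KDD query would be routed here by the query processor.
            return self._registry[algo_name].run(rows)

    class RowCounter(MiningAlgorithm):
        name = "count"
        def run(self, rows):
            return sum(1 for _ in rows)

    engine = MiningEngine()
    engine.register(RowCounter())
    print(engine.execute("count", [(1, "a"), (2, "b")]))  # 2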
Remote Sensing | 2015
César Antonio Ortiz Toro; Consuelo Gonzalo Martín; Ángel Mario García Pedrero; Ernestina Menasalvas Ruiz
The new generation of artificial satellites is providing a huge amount of Earth observation images whose exploitation can yield invaluable benefits, both economic and environmental. However, only a small fraction of this data volume has been analyzed, mainly due to the large human resources needed for that task. Hence, the development of unsupervised methodologies for the analysis of these images is a priority. In this work, a new unsupervised segmentation algorithm for satellite images is proposed. This algorithm is based on rough-set theory and is inspired by a previous segmentation algorithm defined in the RGB color domain. The main contributions of the new algorithm are: (i) the original algorithm is extended to four spectral bands; (ii) the concept of the superpixel is used to define the neighborhood similarity of a pixel adapted to the local characteristics of each image; and (iii) two new region-merging strategies are proposed and evaluated in order to establish the final number of regions in the segmented image. The experimental results show that the proposed approach improves the results provided by the original method when both are applied to satellite images with different spectral and spatial resolutions.
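As an illustration of what a region-merging strategy (contribution iii) can look like, here is a sketch of a greedy merge of adjacent regions with similar four-band mean spectra; the similarity criterion, the threshold and the size-weighted mean update are assumptions, not the strategies evaluated in the paper.

    # Greedy merging of adjacent regions on four-band mean spectra (a sketch;
    # the criterion and threshold are assumptions, not the paper's strategies).
    import numpy as np

    def merge_regions(means: dict[int, np.ndarray],
                      sizes: dict[int, int],
                      adjacency: set[tuple[int, int]],
                      threshold: float) -> dict[int, int]:
        """Merge adjacent regions whose mean spectra lie within threshold.
        Returns a mapping region_id -> surviving region_id."""
        parent = {r: r for r in means}

        def find(r):
            while parent[r] != r:
                r = parent[r]
            return r

        changed = True
        while changed:
            changed = False
            for a, b in list(adjacency):
                ra, rb = find(a), find(b)
                if ra == rb:
                    continue
                if np.linalg.norm(means[ra] - means[rb]) < threshold:
                    # Size-weighted update of the surviving region's spectrum.
                    n = sizes[ra] + sizes[rb]
                    means[ra] = (sizes[ra] * means[ra] + sizes[rb] * means[rb]) / n
                    sizes[ra] = n
                    parent[rb] = ra
                    changed = True
        return {r: find(r) for r in parent}

    means = {1: np.array([.10, .20, .30, .40]),
             2: np.array([.12, .21, .29, .41]),
             3: np.array([.90, .80, .70, .60])}
    sizes = {1: 10, 2: 5, 3: 8}
    print(merge_regions(means, sizes, {(1, 2), (2, 3)}, threshold=0.1))
    # {1: 1, 2: 1, 3: 3} -- regions 1 and 2 merge, region 3 stays separate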
Lecture Notes in Computer Science | 2002
Esther Hochsztain; Socorro Millán; Ernestina Menasalvas Ruiz
Due to the competitive environment in which they operate, web sites need to be very attractive to visitors. In this paper we propose an approach to analyze and determine the level of affinity of a web site that tends to secure user satisfaction, based on both the kind of page and the kind of user. We propose a granular approach based on the idea that a page can be considered a set of features or factors, each of which can be perceived at different granularity levels. The proposed approach makes it possible to estimate a measure of affinity of a user for each level of each particular factor. On any particular page, each factor takes a certain level or value. The global measure of affinity for a certain page is calculated by jointly considering the levels or values that the attributes of that page take for each design factor.
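A minimal sketch of the granular computation may help: assuming per-user affinity estimates for each (factor, level) pair, the global affinity of a page aggregates the affinities of the levels that page exhibits. The equal-weight average below is an assumption, not the paper's formula, and all factor names are invented.

    # Hypothetical per-user affinity estimates for (factor, level) pairs.
    user_affinity = {
        ("layout", "dense"): 0.2,
        ("layout", "sparse"): 0.8,
        ("media", "text"): 0.6,
        ("media", "video"): 0.9,
    }

    # A particular page fixes one level per design factor.
    page = {"layout": "sparse", "media": "video"}

    def global_affinity(page, user_affinity, weights=None):
        # Weighted average over factors; equal weights by default (assumption).
        weights = weights or {f: 1.0 for f in page}
        total = sum(weights.values())
        return sum(weights[f] * user_affinity[(f, level)]
                   for f, level in page.items()) / total

    print(global_affinity(page, user_affinity))  # 0.85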
Archive | 2014
Chris Cornelis; Marzena Kryszkiewicz; Dominik Ślȩzak; Ernestina Menasalvas Ruiz; Rafael Bello; Lin Shang
Before the advent of fuzzy and rough sets, some authors in the 1960s studied three-valued logics and pairs of sets with a meaning similar to those we encounter nowadays in modern theories such as rough sets, decision theory and granular computing. We review these studies using modern terminology and with reference to the present literature. Finally, we put forward some future directions of investigation.
Lecture Notes in Computer Science | 2002
Juan Francisco Martínez Sarrías; Anita Wasilewska; Michael Hadjimichael; Covadonga Fernández; Ernestina Menasalvas Ruiz
The relational model provides simple methods for data analysis, such as query and reporting tools. Data mining systems also provide data analysis capabilities. However, there is no uniform model which handles the representation and mining of data in such a way as to provide a standardization of inputs, outputs, and processing. In this paper we present a model of a data mining operator and a data mining structure that can be implemented over a relational database management system in the same way that multidimensional structures have been implemented.
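As a rough illustration of the operator idea (not the model defined in the paper), the sketch below materializes a mining structure, here simple class-conditional counts, as another relation inside the database, much as multidimensional cube structures are materialized over an RDBMS; all table, column and function names are hypothetical.

    # Hypothetical sketch: a data mining operator that consumes a relation and
    # materializes its output (a mining structure) as another relation.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE source (outlook TEXT, play TEXT);
    INSERT INTO source VALUES ('sunny','no'), ('sunny','no'),
                              ('overcast','yes'), ('rain','yes');
    """)

    def mine_frequencies(conn, table, attr, target):
        # Store class-conditional counts as a relational mining structure.
        conn.execute(f"""
            CREATE TABLE mining_structure AS
            SELECT {attr} AS value, {target} AS class, COUNT(*) AS support
            FROM {table} GROUP BY {attr}, {target}
        """)

    mine_frequencies(conn, "source", "outlook", "play")
    print(conn.execute("SELECT * FROM mining_structure ORDER BY value").fetchall())
    # [('overcast', 'yes', 1), ('rain', 'yes', 1), ('sunny', 'no', 2)]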
Computer Methods and Programs in Biomedicine | 2016
Alejandro Rodríguez-González; Ernestina Menasalvas Ruiz; Miguel A. Mayer Pujadas
BACKGROUND: In the last few years the use of social media in medicine has grown exponentially, providing a new area of research based on the analysis and use of Web 2.0 capabilities. In addition, the use of social media in medical education is a subject of particular interest which has been addressed in several studies. One example of this application is the medical quizzes of The New England Journal of Medicine (NEJM), which regularly publishes sets of questions through its Facebook timeline. OBJECTIVE: We present an approach for the automatic extraction of medical quizzes and their associated answers on a Facebook platform by means of a set of computer-based methods and algorithms. METHODS: We have developed a tool, implemented in Java, for the extraction and analysis of the medical quizzes stored on the timeline of the NEJM Facebook page. The system is divided into two main modules: Crawler and Data Retrieval. RESULTS: The system was launched on December 31, 2014 and crawled a total of 3004 valid posts and 200,081 valid comments. The first post was dated July 23, 2009 and the last one December 30, 2014. 285 quizzes were analyzed, with 32,780 different users providing answers. Of the 285 quizzes, answer patterns were found in 261 (91.58%). In these 261 quizzes, users followed trends of incorrect answers in 13 quizzes and trends of correct answers in 248. CONCLUSIONS: The tool automatically identifies the correct and wrong answers to quizzes posted on Facebook in text format, with a small rate of false negatives, and the approach could be applied to the extraction and analysis of other Internet sources after some adaptation. Highlights: The New England Journal of Medicine (NEJM) is a very prestigious medical journal. The NEJM Facebook page currently has more than 1.25 million users. Medical quizzes are one of the methods used to test the knowledge of future physicians. Our approach allows extracting medical quizzes published on the NEJM Facebook page. This is the first study of the content of medical quizzes in social networks.
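A toy sketch of the answer-trend detection described in the results may clarify the idea: once the answers in the comments of a quiz post have been extracted, the dominant answer can be found by majority counting. The extraction itself (the Java Crawler and Data Retrieval modules) is omitted, and the function below is an invented simplification, not the paper's algorithm.

    # Toy sketch: given answers extracted from one quiz's comments, find the
    # answer users converge on. Parsing of Facebook comments is out of scope.
    from collections import Counter

    def dominant_answer(extracted_answers: list[str], min_share: float = 0.5):
        """Return the majority answer if it exceeds min_share of votes, else None."""
        counts = Counter(a.strip().upper() for a in extracted_answers)
        answer, votes = counts.most_common(1)[0]
        return answer if votes / sum(counts.values()) > min_share else None

    comments = ["A", "a", "B", "A", "C", "A"]
    print(dominant_answer(comments))  # 'A' -- 4 of 6 votes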
Lecture Notes in Computer Science | 2000
María C. Fernández-Baizán; Ernestina Menasalvas Ruiz; José María Peña Sánchez; Juan Francisco Martínez Sarrías; Socorro Millán
Ever since data mining first appeared, a considerable number of algorithms, methods and techniques have been developed, and as a result of this research most of them have become increasingly effective and efficient. Different algorithms are often compared for solving a given problem; however, algorithms that follow different approaches are rarely applied jointly to obtain better results. This paper presents an approach that combines a predictive model (rough sets) with a link analysis model (the Apriori algorithm).
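To make the combination concrete, here is a compact sketch of the Apriori side; the rough-set predictive step is assumed to have already reduced the attribute set whose values appear as items, and the transactions and support threshold below are invented for illustration.

    # Compact Apriori sketch: find all itemsets meeting a minimum support.
    from itertools import combinations

    def apriori(transactions: list[set], min_support: int) -> dict:
        """Return itemsets appearing in at least min_support transactions."""
        items = {i for t in transactions for i in t}
        candidates = [frozenset([i]) for i in items]
        frequent = {}
        while candidates:
            counts = {c: sum(c <= t for t in transactions) for c in candidates}
            level = {c: n for c, n in counts.items() if n >= min_support}
            frequent.update(level)
            # Join step: extend surviving itemsets by one item.
            keys = list(level)
            candidates = list({a | b for a, b in combinations(keys, 2)
                               if len(a | b) == len(a) + 1})
        return frequent

    tx = [{"a", "b"}, {"a", "b", "c"}, {"a", "c"}]
    print(apriori(tx, min_support=2))
    # {a}:3, {b}:2, {c}:2, {a,b}:2, {a,c}:2 (as frozensets)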
bioRxiv | 2018
Eduardo P. Garcia del Valle; Gerardo Lagunes Garcia; Lucia Prieto Santamaria; Massimiliano Zanin; Ernestina Menasalvas Ruiz; Alejandro Rodríguez González
Over a decade ago, a new discipline called network medicine emerged as an approach to understanding human diseases from a network-theory point of view. Disease networks have proved to be an intuitive and powerful way to reveal hidden connections among apparently unconnected biomedical entities such as diseases, physiological processes, signaling pathways, and genes. One of the fields that has benefited most from this development is the identification of new opportunities for the use of old drugs, known as drug repurposing. The importance of drug repurposing lies in the high costs and the prolonged time from target selection to regulatory approval in traditional drug development. In this document we analyze the evolution of the disease network concept during the last decade and apply a data science pipeline approach to evaluate its functional units. As a result of this analysis, we obtain a list of the most commonly used functional units and of the challenges that remain to be solved. This information can be very valuable for the generation of new prediction models based on disease networks.
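As a toy illustration of the disease-network idea (not the pipeline analyzed in the paper), two diseases can be linked whenever they share an underlying biological entity such as a gene; the disease-gene associations below are invented.

    # Toy disease network: link diseases that share at least one gene.
    import networkx as nx
    from itertools import combinations

    disease_genes = {
        "disease_A": {"TP53", "BRCA1"},
        "disease_B": {"BRCA1", "EGFR"},
        "disease_C": {"INS"},
    }

    g = nx.Graph()
    for d1, d2 in combinations(disease_genes, 2):
        shared = disease_genes[d1] & disease_genes[d2]
        if shared:
            g.add_edge(d1, d2, shared_genes=sorted(shared))

    print(list(g.edges(data=True)))
    # [('disease_A', 'disease_B', {'shared_genes': ['BRCA1']})]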
bioRxiv | 2018
Gerardo Lagunes Garcia; Alejandro Rodríguez González; Lucia Prieto Santamaria; Eduardo P. Garcia del Valle; Massimiliano Zanin; Ernestina Menasalvas Ruiz
Within the global endeavour of improving population health, one major challenge is the increasingly high cost associated with drug development. Drug repositioning, i.e. finding new uses for existing drugs, is a promising alternative; yet, its effectiveness has hitherto been hindered by our limited knowledge about diseases and their relationships. In this paper we present DISNET (Drug repositioning and disease understanding through complex networks creation and analysis), a web-based system designed to extract knowledge from signs and symptoms retrieved from medical databases, and to enable the creation of customisable disease networks. We present the main features of the DISNET system. We describe how information on diseases and their phenotypic manifestations is extracted from Wikipedia, PubMed and Mayo Clinic; specifically, texts from these sources are processed through a combination of text mining and natural language processing techniques. We further present a validation of the processing performed by the system, and describe, with some simple use cases, how a user can interact with it and extract information that could be used for subsequent analyses. Database URL: http://disnet.ctb.upm.es
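As a highly simplified sketch of the phenotype-extraction step, symptom mentions can be matched against disease texts with a controlled vocabulary; DISNET's actual pipeline combines text mining and NLP techniques, and the vocabulary and naive matching below are placeholders, not the system's method.

    # Placeholder sketch of symptom extraction via vocabulary matching;
    # not DISNET's actual text mining / NLP pipeline.
    import re

    symptom_vocabulary = {"fever", "cough", "fatigue", "chest pain"}

    def extract_symptoms(text: str) -> set[str]:
        """Return vocabulary terms found as whole words/phrases in the text."""
        text = text.lower()
        return {term for term in symptom_vocabulary
                if re.search(rf"\b{re.escape(term)}\b", text)}

    doc = "Patients typically present with fever, persistent cough and fatigue."
    print(sorted(extract_symptoms(doc)))  # ['cough', 'fatigue', 'fever']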