Mariano Rico
Autonomous University of Madrid
Publication
Featured research published by Mariano Rico.
Lecture Notes in Computer Science | 2004
Pablo Castells; Borja Foncillas; Rubén Lara; Mariano Rico; Juan Luis Alonso
The field of economics and finance is a conceptually rich domain in which information is complex, huge in volume, and a highly valuable business product in itself. Novel management techniques are required for economic and financial information in order to enable the efficient generation, management, and consumption of large, complex information resources. In this direction, we have developed an ontology-based platform that provides (a) the integration of contents and semantics in a knowledge base offering a conceptual view on low-level contents, (b) an adaptive hypermedia-based knowledge visualization and navigation system, and (c) semantic search facilities. As the basis of this platform, we have developed an ontology for the domain of economic and financial information.
Lecture Notes in Computer Science | 2004
Pablo Castells; Ferran Perdrix; Estrella Pulido; Mariano Rico; V. Richard Benjamins; Jesús Contreras; Jesús Lorés
Newspaper archives are a fundamental working tool for editorial teams. Their exploitation in digital format through the web, and the provision of technology to make this possible, are also important businesses today. The volume of archive contents, and the complexity of the human teams that create and maintain them, give rise to diverse management difficulties. We propose the introduction of emergent semantic-based technologies to improve the processes of creation, maintenance, and exploitation of a newspaper's digital archive. We describe a platform based on these technologies that consists of (a) a knowledge base associated with the newspaper archive, built on an ontology for the description of journalistic information, (b) a semantic search module, and (c) an ontology-based module for content browsing and visualisation.
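The ontology-based retrieval idea described above can be sketched in a few lines: news items annotated against a journalism ontology are retrieved by concept rather than by keyword. The triples, concept names, and helper function below are invented for illustration and do not reflect the platform's actual ontology or API.

```python
# Hypothetical knowledge base: (subject, predicate, object) triples that
# annotate archive items with concepts from a journalism ontology.
kb = [
    ("news:1", "type", "onto:Sports"),
    ("news:1", "mentions", "FC Barcelona"),
    ("news:2", "type", "onto:Politics"),
]

def items_about(kb, concept):
    """Return archive items annotated with the given ontology concept."""
    return [s for s, p, o in kb if p == "type" and o == concept]

print(items_about(kb, "onto:Sports"))  # only the sports item matches
```

In the real platform this role would be played by a semantic search module querying an RDF store, but the principle is the same: the concept hierarchy, not the surface text, drives retrieval.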
Archive | 2006
Pablo Castells; Ferran Perdrix; Estrella Pulido; Mariano Rico; José María Fuentes; R. Benjamins; Jesús Contreras; E. Piqué; J. Cal; Jesús Lorés; Toni Granollers
The introduction of information technologies in the news industry has marked a new evolutionary cycle in journalistic activity. The creation of new infrastructures, protocols and exchange standards for the automatic or on-demand distribution and/or sale of information packages through different channels and transmission formats has deeply transformed the way in which news industry players communicate with each other. One interesting consequence of this technological transformation has been the emergence, in very few years, of a whole new market of online services for archive news redistribution, syndication, aggregation, and brokering. Newspaper archives are a highly valuable information asset for the widest range of information consumer profiles: students, researchers, historians, business professionals, the general public, and not least, news writers themselves. Providing technology for news archive construction, management, access, publication, and billing is an important business nowadays. The information collected from everyday news is huge in volume, very loosely organized, and grows without a global a priori structure. This ever-growing corpus of archived news results from the coordinated but to a large extent autonomous work of a team of reporters, whose primary goal is not to build an archive, but to serve the best possible information product for
Proceedings of the 16th Conference of the Spanish Association for Artificial Intelligence on Advances in Artificial Intelligence - Volume 9422 | 2015
Nandana Mihindukulasooriya; Mariano Rico; Raúl García-Castro; Asunción Gómez-Pérez
DBpedia exposes data from Wikipedia as machine-readable Linked Data. The DBpedia data extraction process generates RDF data in two ways: (a) using the mappings that map data from Wikipedia infoboxes to the DBpedia ontology and other vocabularies, and (b) using infobox properties, i.e., properties that are not defined in the DBpedia ontology but are auto-generated from infobox attribute-value pairs. The work presented in this paper examines quality issues in the properties used in the Spanish DBpedia dataset along the conciseness, consistency, syntactic validity, and semantic accuracy quality dimensions. The main contribution of the paper is the identification of quality issues in the Spanish DBpedia and the possible causes of their existence. The findings presented in this paper can be used as feedback to improve the DBpedia extraction process in order to eliminate such quality issues from DBpedia.
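The two extraction paths can be contrasted with a minimal sketch: a mapping-based triple uses a property from the DBpedia ontology namespace, while an auto-generated one carries the raw infobox attribute. The example data and the simple consistency check are illustrative only; the actual DBpedia namespaces shown are real, but the triples are invented.

```python
ONTOLOGY_NS = "http://dbpedia.org/ontology/"
INFOBOX_NS = "http://es.dbpedia.org/property/"  # raw infobox-properties

triples = [
    # (a) mapping-based: infobox attribute mapped to the DBpedia ontology
    ("dbr:Madrid", ONTOLOGY_NS + "populationTotal", "3223334"),
    # (b) auto-generated: attribute-value pair copied verbatim, with a
    # locale-specific label and unparsed number formatting
    ("dbr:Madrid", INFOBOX_NS + "población", "3.223.334"),
]

def unmapped_properties(triples):
    """Flag properties that bypass the ontology (a conciseness issue)."""
    return [p for _, p, _ in triples if not p.startswith(ONTOLOGY_NS)]

print(unmapped_properties(triples))
```

Checks of this kind, scaled over the whole dataset, are one way to surface the conciseness and consistency issues the paper analyzes.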
Journal of Information Science and Engineering | 2010
Mariano Rico; Francisco García-Sánchez; Juan Miguel Gómez; Rafael Valencia-García; Jesualdo Tomás Fernández-Breis
The Web has changed from a mere repository of information to a new platform for business transactions where organizations deploy, share and expose business processes via Web services. New promising application fields such as the Semantic Web and Semantic Web Services are leveraging the potential of deploying those services, but face the problem of discovering and invoking them in a way that is simple for ordinary users. GGODO is an experimental solution that combines natural language analysis and semantically-empowered techniques to let users express their goals in a guided way, producing better results than previous non-guided tools.
intelligent distributed computing | 2008
Mariano Rico; David Camacho; Oscar Corcho
This paper describes a distributed collaborative wiki-based platform that has been designed to facilitate the development of Semantic Web applications. The applications designed using this platform are able to build semantic data through the cooperation of different developers and to exploit that semantic data. The paper shows a practical case study on the application VPOET, and how an application based on Google Gadgets has been designed to test VPOET and let human users exploit the semantic data created. This practical example can be used to show how different Semantic Web technologies can be integrated into a particular Web application, and how the knowledge can be cooperatively improved.
International Journal on Semantic Web and Information Systems | 2010
David Camacho; Mariano Rico; Oscar Corcho; José A. Macías
Current web application development requires highly qualified staff dealing with an extensive number of architectures and technologies. When these applications incorporate semantic data, the list of skill requirements becomes even larger, leading to a high adoption barrier for the development of semantically enabled Web applications. This paper describes VPOET, a tool aimed mainly at two types of users: web designers and web application developers. By using this tool, web designers do not need specific skills in Semantic Web technologies to create web templates that handle semantic data. Web application developers incorporate those templates into their web applications by means of a simple mechanism based on HTTP messages, and end-users can use these templates through a Google Gadget. As web designers play a key role in the system, an experimental evaluation has been conducted, showing that VPOET provides good usability for a representative group of web designers with a wide range of competencies in client-side technologies, ranging from amateur HTML developers to professional web designers.
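The HTTP-based template mechanism can be sketched as follows: a client application composes a simple GET request naming a designer and an ontology component, and receives the corresponding template. The endpoint, host, and parameter names below are hypothetical stand-ins, not the actual VPOET API.

```python
from urllib.parse import urlencode

def template_request(designer, component, action="render"):
    """Compose the URL a client application would fetch to obtain a
    designer's template for a given ontology component."""
    base = "http://example.org/vpoet"  # placeholder host, not the real service
    query = urlencode({"designer": designer,
                       "component": component,
                       "action": action})
    return f"{base}?{query}"

print(template_request("alice", "foaf:Person"))
```

The appeal of such a mechanism is that a developer needs nothing beyond an HTTP client: no RDF parsing, no triple store, no Semantic Web toolchain.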
database and expert systems applications | 2009
Mariano Rico; David Camacho; Oscar Corcho
This paper presents a semantically-enabled web application named MIG, used to create user profiles. It enhances accessibility by allowing the creation of a user interface adapted to each user's needs, preferences, and device. This approach exploits Semantic Web technologies and the infrastructure and applications created in previous work.
international work-conference on the interplay between natural and artificial computation | 2007
Juan Miguel Gómez; Mariano Rico; Francisco García-Sánchez; Ying Liu; Marília Terra de Mello
Biomedical research is now information-intensive; the volume and diversity of new data sources challenge current database technologies. The development and tuning of database technologies for biology and medicine will maintain and accelerate the current pace of innovation and discovery. New promising application fields such as the Semantic Web and Semantic Web Services can leverage the potential of biomedical information integration and discovery, facing the problem of semantic heterogeneity across biomedical information sources that span a variety of storage and data formats, widely distributed both across the Internet and within individual organizations. In this paper, we present BIRD, a fully-fledged biomedical information integration solution that combines natural language analysis and semantically-empowered techniques to determine how user needs can best be met. Our approach is backed by a proof-of-concept implementation in which the efficiency of integrating the biomedical publications database PubMed, the Database of Interacting Proteins (DIP), and the Munich Information Center for Protein Sequences (MIPS) has been tested.
acm symposium on applied computing | 2018
Mariano Rico; Nandana Mihindukulasooriya; Dimitris Kontokostas; Heiko Paulheim; Sebastian Hellmann; Asunción Gómez-Pérez
DBpedia releases consist of more than 70 multilingual datasets that cover data extracted from different language-specific Wikipedia instances. The data extracted from those Wikipedia instances are transformed into RDF using mappings created by the DBpedia community. Nevertheless, not all the mappings are correct and consistent across the distinct language-specific DBpedia datasets. As these incorrect mappings are spread across a large number of mappings, it is not feasible to inspect them all manually to ensure their correctness. Thus, the goal of this work is to propose a data-driven method to detect incorrect mappings automatically by analyzing information from both instance data and ontological axioms. We propose a machine-learning-based approach to build a predictive model that can detect incorrect mappings. We have evaluated different supervised classification algorithms for this task, and our best model achieves 93% accuracy. These results help us detect incorrect mappings and achieve a high-quality DBpedia.
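The data-driven idea can be sketched in miniature: represent each mapping by features derived from instance data (for example, the fraction of its values that are consistent with the range declared in the ontology) and classify it. The mappings, feature, and threshold below are invented for illustration; the paper trains real supervised classifiers on richer features.

```python
mappings = [
    # (mapping id, fraction of instance values consistent with the
    #  property's declared range -- a hypothetical feature)
    ("birthPlace->dbo:birthPlace", 0.98),
    ("elevation->dbo:birthDate",   0.05),  # suspicious: values rarely fit
]

def classify(mappings, threshold=0.5):
    """Label a mapping 'correct' when most instance values fit its range.
    A stand-in for the trained classifier described in the paper."""
    return {m: ("correct" if score >= threshold else "incorrect")
            for m, score in mappings}

print(classify(mappings))
```

A trained classifier replaces the fixed threshold with a decision boundary learned from labeled examples, which is what allows the reported 93% accuracy across heterogeneous features.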