Raúl Palma
Technical University of Madrid
Publications
Featured research published by Raúl Palma.
International Conference on the Move to Meaningful Internet Systems | 2005
Jens Hartmann; Raúl Palma; York Sure; M. Carmen Suárez-Figueroa; Peter Haase; Asunción Gómez-Pérez; Rudi Studer
Ontologies have seen enormous development and application in many domains in recent years, especially in the context of the next web generation, the Semantic Web. Besides the work of countless researchers across the world, industry has started developing ontologies to support its daily operative business. Currently, however, most ontologies exist in pure form, without any additional information such as the authorship details that Dublin Core provides for text documents. This lack makes it difficult for academia and industry to identify, find and apply (that is, to reuse) ontologies effectively and efficiently. Our contribution consists of (i) a proposal for a metadata standard, the so-called Ontology Metadata Vocabulary (OMV), which is based on discussions in the EU IST thematic network of excellence Knowledge Web, and (ii) two complementary reference implementations which show the benefit of such a standard in decentralized and centralized scenarios, namely the Oyster P2P system and the Onthology metadata portal.
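To make the idea of ontology metadata concrete, here is a minimal sketch in Python (rdflib) of an OMV-style metadata record attached to an ontology entry. The omv: namespace URI and property names are assumptions based on the vocabulary's general design, not verbatim terms from the OMV specification.

```python
# Minimal sketch of OMV-style ontology metadata expressed with rdflib.
# The omv: namespace URI and property names are illustrative assumptions,
# not a verbatim copy of the published OMV specification.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

OMV = Namespace("http://omv.ontoware.org/2005/05/ontology#")  # assumed base URI

g = Graph()
g.bind("omv", OMV)

onto = URIRef("http://example.org/ontologies/fishery")
g.add((onto, RDF.type, OMV.Ontology))
g.add((onto, OMV.name, Literal("Fishery Ontology")))
g.add((onto, OMV.description, Literal("Domain ontology describing fish stocks and fisheries.")))
g.add((onto, OMV.creationDate, Literal("2005-06-01", datatype=XSD.date)))

print(g.serialize(format="turtle"))
```

Such a record is what a registry like Oyster or Onthology would index so that ontologies can be found and compared before being reused.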
Journal of Web Semantics | 2015
Khalid Belhajjame; Jun Zhao; Daniel Garijo; Matthew Gamble; Kristina M. Hettne; Raúl Palma; Eleni Mina; Oscar Corcho; José Manuél Gómez-Pérez; Sean Bechhofer; Graham Klyne; Carole A. Goble
Scientific workflows are a popular mechanism for specifying and automating data-driven in silico experiments. A significant aspect of their value lies in their potential to be reused. Once shared, workflows become useful building blocks that can be combined or modified for developing new experiments. However, previous studies have shown that storing workflow specifications alone is not sufficient to ensure that they can be successfully reused, without being able to understand what the workflows aim to achieve or to re-enact them. To gain an understanding of the workflow, and how it may be used and repurposed for their needs, scientists require access to additional resources such as annotations describing the workflow, datasets used and produced by the workflow, and provenance traces recording workflow executions. In this article, we present a novel approach to the preservation of scientific workflows through the application of research objects: aggregations of data and metadata that enrich the workflow specifications. Our approach is realised as a suite of ontologies that support the creation of workflow-centric research objects. Their design was guided by requirements elicited from previous empirical analyses of workflow decay and repair. The ontologies developed make use of and extend existing well-known ontologies, namely the Object Reuse and Exchange (ORE) vocabulary, the Annotation Ontology (AO) and the W3C PROV ontology (PROV-O). We illustrate the application of the ontologies for building Workflow Research Objects with a case study that investigates Huntington's disease, performed in collaboration with a team from the Leiden University Medical Centre (HG-LUMC). Finally, we present a number of tools developed for creating and managing workflow-centric research objects.
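As a rough illustration of what a workflow-centric research object aggregates, the sketch below combines a workflow, its input dataset and a provenance trace using ORE aggregation terms and a PROV-O activity, via rdflib. The ro: namespace and the example URIs are assumptions for illustration; they are not the exact terms defined by the paper's ontologies.

```python
# Sketch of a workflow-centric research object: an ORE aggregation of a
# workflow, its input dataset, and a PROV-O provenance trace. Namespace and
# resource URIs are illustrative assumptions, not the exact RO ontology terms.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF

ORE = Namespace("http://www.openarchives.org/ore/terms/")
PROV = Namespace("http://www.w3.org/ns/prov#")
RO = Namespace("http://purl.org/wf4ever/ro#")  # assumed RO ontology base URI

g = Graph()
for prefix, ns in [("ore", ORE), ("prov", PROV), ("ro", RO)]:
    g.bind(prefix, ns)

ro_uri = URIRef("http://example.org/ro/huntington-study/")
workflow = URIRef("http://example.org/ro/huntington-study/workflow.t2flow")
dataset = URIRef("http://example.org/ro/huntington-study/input.csv")
run = URIRef("http://example.org/ro/huntington-study/runs/1")

# The research object aggregates the workflow, its data and its run record.
g.add((ro_uri, RDF.type, ORE.Aggregation))
for resource in (workflow, dataset, run):
    g.add((ro_uri, ORE.aggregates, resource))

# The execution is recorded as a PROV activity that used the dataset.
g.add((run, RDF.type, PROV.Activity))
g.add((run, PROV.used, dataset))

print(g.serialize(format="turtle"))
```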
International World Wide Web Conference | 2006
Raúl Palma; Peter Haase; Asunción Gómez-Pérez
In this paper, we present Oyster, a Peer-to-Peer system for exchanging ontology metadata among communities in the Semantic Web. Oyster exploits semantic web techniques in data representation, query formulation and query result presentation to provide an online solution for sharing ontologies, thus assisting researchers in re-using existing ontologies.
Journal of Web Semantics | 2011
Raúl Palma; Oscar Corcho; Asunción Gómez-Pérez; Peter Haase
This paper describes our methodological and technological approach for collaborative ontology development in inter-organizational settings. It is based on the formalization of the collaborative ontology development process by means of an explicit editorial workflow, which coordinates proposals for changes among ontology editors in a flexible manner. This approach is supported by new models, methods and strategies for ontology change management in distributed environments: we propose a new form of ontology change representation, organized in layers so as to provide as much independence as possible from the underlying ontology languages, together with methods and strategies for their manipulation, version management, capture, storage and maintenance, some of which are based on existing proposals in the state of the art. Moreover, we propose a set of change propagation strategies that keep distributed copies of the same ontology synchronized. Finally, we illustrate and evaluate our approach with a test case in the fishery domain from the United Nations Food and Agriculture Organisation (FAO). The preliminary results obtained from our evaluation give a positive indication of the practical value and usability of the work presented here.
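The layered change representation and the propagation strategies can be pictured roughly as follows: a language-independent change record carries the operation, target and editorial state, an optional payload holds language-specific detail, and approved changes are replayed to every distributed copy. This is a hypothetical Python sketch of the idea, not the paper's actual data model.

```python
# Hypothetical sketch of a layered ontology-change record: a generic,
# language-independent layer (operation, target, author, state) with an
# optional language-specific payload, plus a propagation step that replays
# approved changes to distributed copies. Illustrative only.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class ChangeState(Enum):
    PROPOSED = "proposed"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class GenericChange:
    operation: str                      # e.g. "addClass", "removeSubClassOf"
    target: str                         # IRI of the affected entity
    author: str
    state: ChangeState = ChangeState.PROPOSED
    language_payload: Optional[str] = None   # serialized language-specific detail


@dataclass
class OntologyCopy:
    name: str
    applied: List[GenericChange] = field(default_factory=list)

    def apply(self, change: GenericChange) -> None:
        if change.state is ChangeState.APPROVED:
            self.applied.append(change)


def propagate(change: GenericChange, copies: List[OntologyCopy]) -> None:
    """Replay an approved change on every distributed copy of the ontology."""
    for copy in copies:
        copy.apply(change)


change = GenericChange("addClass", "http://example.org/fishery#TunaStock", "editor1")
change.state = ChangeState.APPROVED
copies = [OntologyCopy("FAO-master"), OntologyCopy("local-mirror")]
propagate(change, copies)
print([len(c.applied) for c in copies])   # -> [1, 1]
```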
Journal of Web Semantics | 2015
C. Maria Keet; Agnieszka Ławrynowicz; Claudia d’Amato; Alexandros Kalousis; Phong Nguyen; Raúl Palma; Robert Stevens; Melanie Hilario
The Data Mining OPtimization Ontology (DMOP) has been developed to support informed decision-making at various choice points of the data mining process. The ontology can be used by data miners and deployed in ontology-driven information systems. The primary purpose for which DMOP has been developed is the automation of algorithm and model selection through semantic meta-mining, an ontology-based meta-analysis of complete data mining processes aimed at extracting patterns associated with mining performance. To this end, DMOP contains detailed descriptions of data mining tasks (e.g., learning, feature selection), data, algorithms, hypotheses such as mined models or patterns, and workflows. A development methodology was used for DMOP, including items such as competency questions and foundational ontology reuse. Several non-trivial modeling problems were encountered, and due to the complexity of the data mining details, the ontology requires the OWL 2 DL profile. DMOP was successfully evaluated for semantic meta-mining and used in constructing the Intelligent Discovery Assistant, deployed in the popular data mining environment RapidMiner.
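A typical use of such an ontology is asking which algorithms address a given data mining task. The SPARQL sketch below, run through rdflib, is illustrative only: the file name and the dmop: class and property names are assumed placeholders rather than guaranteed DMOP terms.

```python
# Illustrative sketch: querying a DMOP-like ontology for algorithms that
# address a feature-selection task. The file name and dmop: terms are assumed
# placeholders and may not match the published DMOP vocabulary exactly.
from rdflib import Graph

g = Graph()
g.parse("dmop.owl", format="xml")   # hypothetical local copy of the ontology

query = """
PREFIX dmop: <http://www.e-lico.eu/ontologies/dmo/DMOP/DMOP.owl#>
SELECT ?algorithm WHERE {
    ?algorithm a dmop:DM-Algorithm ;
               dmop:addresses ?task .
    ?task a dmop:FeatureSelectionTask .
}
"""
for row in g.query(query):
    print(row.algorithm)
```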
Asian Semantic Web Conference | 2008
Raúl Palma; Peter Haase; Oscar Corcho; Asunción Gómez-Pérez; Qiu Ji
The widespread use of ontologies in recent years has raised new challenges for their development and maintenance. Ontology development has transformed from a process normally performed by one ontology engineer into a process performed collaboratively by a team of ontology engineers, who may be geographically distributed and play different roles. For example, editors may propose changes, while authoritative users approve or reject them following a well-defined process. This process, however, has only been partially addressed by existing ontology development methods, methodologies, and tool support. Furthermore, in a distributed environment where ontology editors may be working on local copies of the same ontology, strategies should be in place to ensure that changes in one copy are reflected in all of them. In this paper, we propose a workflow-based model for the collaborative development of ontologies in distributed environments and describe the components required to support it. We illustrate our model with a test case in the fishery domain from the United Nations Food and Agriculture Organisation (FAO).
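The editorial workflow described above can be thought of as a small state machine over change proposals: editors submit proposals, and only authoritative users may approve or reject them. The following is a minimal, hypothetical illustration of that coordination pattern, not the paper's implementation.

```python
# Minimal, hypothetical sketch of a role-based editorial workflow for ontology
# change proposals: editors submit, authoritative users approve or reject.
# Illustrates the coordination pattern only, not the paper's implementation.

ALLOWED_TRANSITIONS = {
    ("proposed", "approved"): "authority",
    ("proposed", "rejected"): "authority",
}


class Proposal:
    def __init__(self, description: str, editor: str):
        self.description = description
        self.editor = editor
        self.state = "proposed"

    def transition(self, new_state: str, user_role: str) -> None:
        required_role = ALLOWED_TRANSITIONS.get((self.state, new_state))
        if required_role is None:
            raise ValueError(f"Illegal transition {self.state} -> {new_state}")
        if user_role != required_role:
            raise PermissionError(f"Role '{required_role}' is required for this transition")
        self.state = new_state


proposal = Proposal("Add subclass TunaStock of FishStock", editor="editor1")
proposal.transition("approved", user_role="authority")
print(proposal.state)   # -> approved
```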
Semantic Web Evaluation Challenge | 2014
Raúl Palma; Piotr Hołubowicz; Oscar Corcho; José Manuél Gómez-Pérez; Cezary Mazurek
Research Objects (ROs) are semantic aggregations of related scientific resources, their annotations and research context. They are meant to help scientists refer to all the materials supporting their investigation. ROHub is a digital library system for ROs that supports their storage, lifecycle management and preservation. It provides a Web interface and a set of RESTful APIs enabling the sharing of scientific findings via ROs. Additionally, ROHub includes different features that help scientists throughout the research lifecycle to create and maintain high-quality ROs that can be interpreted and reproduced in the future. For instance, scientists can assess the conformance of an RO to a set of predefined requirements and create RO Snapshots at any moment to share, cite or submit for review the current state of their research outcomes. ROHub can also generate nested ROs for workflow runs, exposing their content and annotations, and includes monitoring features that generate notifications when changes are detected.
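To give a flavour of how such a RESTful interface might be used, the sketch below creates a research object and aggregates a workflow file into it with Python's requests library. The base URL, endpoint paths, headers and payload fields are purely illustrative assumptions; the actual ROHub API documentation should be consulted for the real interface.

```python
# Illustrative sketch of talking to an RO-management REST API such as ROHub.
# All endpoint paths, headers and payload fields below are assumptions made
# for illustration; the real ROHub API may differ.
import requests

BASE = "https://example.org/rohub/api"        # placeholder base URL
headers = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}

# 1. Create a new research object.
resp = requests.post(
    f"{BASE}/ros/",
    headers=headers,
    json={"title": "Huntington study", "research_areas": ["genomics"]},
)
resp.raise_for_status()
ro_id = resp.json()["identifier"]

# 2. Aggregate a workflow file as a resource of the research object.
with open("workflow.t2flow", "rb") as fh:
    resp = requests.post(
        f"{BASE}/ros/{ro_id}/resources/",
        headers=headers,
        files={"file": fh},
    )
resp.raise_for_status()
print("Aggregated resource:", resp.json())
```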
European Semantic Web Conference | 2006
Jens Hartmann; Elena Paslaru Bontas; Raúl Palma; Asunción Gómez-Pérez
Efficient knowledge sharing and reuse, a prerequisite for the realization of the Semantic Web vision, is currently impeded by the lack of standards for documenting and annotating ontologies with metadata information. We argue that the availability of metadata is a fundamental dimension of ontology reusability. Metadata information provides a basis for ontology developers to evaluate and adapt existing Semantic Web ontologies in new application settings, and fosters the development of support tools such as ontology repositories. However, in order for the metadata information to represent real added value to ontology users, it is equally important to achieve a common agreement on the terms used to describe ontologies, and to provide an appropriate technology infrastructure in the form of tools able to create, manage and distribute this information. In this paper we present DEMO, a framework for the development and deployment of ontology metadata. Besides OMV, the proposed core vocabulary for ontology metadata, the framework comprises an inventory of methods to collaboratively extend OMV in accordance with the requirements of an emerging community of industrial and academic users, and tools for metadata management.
OWL: Experiences and Directions | 2015
Raúl Palma; Tomas Reznik; Miguel Esbrí; Karel Charvat; Cezary Mazurek
The FOODIE project aims at building an open and interoperable specialized agricultural platform in the cloud for the management, discovery and large-scale integration of data relevant to farming production. In particular, the integration focuses on existing open datasets as well as their publication in Linked Data format in order to maximize their reusability and enable the exploitation of the extra knowledge derived from the generated links. Based on such data, for instance, the FOODIE platform aims at providing high-value applications and services supporting the planning and decision-making processes of different stakeholders in the agricultural domain. The keystone for data integration is the FOODIE data model, which has been defined by reusing and extending current standards and best practices, including data specifications from the INSPIRE directive, which are in turn based on the ISO/OGC standards for geographical information. However, as these data specifications are available as XML documents, the first step towards publishing Linked Data required transforming, or lifting, the FOODIE data model into a semantic format. In this paper, we describe this process, which was conducted semi-automatically by reusing existing tools and adhering to the mapping rules for transforming geographic information UML models into OWL ontologies defined by the ISO 19150-2 standard. We describe the challenges associated with this transformation and, finally, the generated ontology, which provides an INSPIRE-based vocabulary for the publication of Agricultural Linked Data.
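As a rough illustration of the lifting step, the sketch below maps a UML class with two attributes to an OWL class and datatype properties, in the spirit of the ISO 19150-2 rules (UML class to owl:Class, attribute to a datatype property named className.attributeName). The foodie: namespace and the simplified rule encoding are assumptions for illustration, not the standard's full mapping or the project's actual ontology.

```python
# Simplified sketch of lifting a UML class into OWL in the spirit of the
# ISO 19150-2 rules: UML class -> owl:Class, UML attribute -> owl:DatatypeProperty.
# The foodie: namespace and the rule encoding are illustrative assumptions.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS, XSD

FOODIE = Namespace("http://example.org/foodie#")   # placeholder namespace

# A toy, hand-written stand-in for a UML class extracted from an XML model.
uml_class = {
    "name": "Plot",
    "attributes": [("description", XSD.string), ("validFrom", XSD.date)],
}

g = Graph()
g.bind("foodie", FOODIE)

cls = FOODIE[uml_class["name"]]
g.add((cls, RDF.type, OWL.Class))
g.add((cls, RDFS.label, Literal(uml_class["name"])))

for attr_name, datatype in uml_class["attributes"]:
    prop = FOODIE[f"{uml_class['name']}.{attr_name}"]
    g.add((prop, RDF.type, OWL.DatatypeProperty))
    g.add((prop, RDFS.domain, cls))
    g.add((prop, RDFS.range, datatype))

print(g.serialize(format="turtle"))
```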
Proceedings of the 1st International Workshop on Digital Preservation of Research Methods and Artefacts | 2013
Raúl Palma; Oscar Corcho; Piotr Hołubowicz; Sara Pérez; Kevin R. Page; Cezary Mazurek
New digital artifacts are emerging in data-intensive science. For example, scientific workflows are executable descriptions of scientific procedures that define the sequence of computational steps in an automated data analysis, supporting reproducible research and the sharing and replication of best-practice and know-how through reuse. Workflows are specified at design time and interpreted through their execution in a variety of situations, environments, and domains. Hence it is essential to preserve both their static and dynamic aspects, along with the research context in which they are used. To achieve this, we propose the use of multidimensional digital objects (Research Objects) that aggregate the resources used and/or produced in scientific investigations, including workflow models, provenance of their executions, and links to the relevant associated resources, along with the provision of technological support for their preservation and efficient retrieval and reuse. In this direction, we specified a software architecture for the design and implementation of a Research Object preservation system, and realized this architecture with a set of services and clients, drawing together practices in digital libraries, preservation systems, workflow management, social networking and Semantic Web technologies. In this paper, we describe the backbone system of this realization, a digital library system built on top of dLibra.