Publication


Featured research published by Nikos Houssos.


International Journal of Metadata, Semantics and Ontologies | 2014

Research information management: the CERIF approach

Keith G. Jeffery; Nikos Houssos; Brigitte Jörg; Anne Asserson

In the context of the wide research environment, we introduce the CERIF (Common European Research Information Format) data model, which (a) has a richer structure than the usual metadata standards used in research information; (b) separates base entities from link entities, thus providing flexibility in expressing role-based temporal relationships; (c) defines a distinct semantic layer so that relationship roles in link entities and controlled value lists in base entities are separately managed and multiple vocabularies can be used and related to each other; (d) can generate the common metadata formats used in research information. CERIF is used widely and is an EU recommendation to member states. At the request of the European Commission, CERIF is maintained, developed and promoted by euroCRIS.
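The separation of base entities from link entities, and of the semantic layer from both, is easiest to see in a small sketch. The Python fragment below is only an illustration of that structure under assumed names; the classes, fields and vocabulary values are invented for the example and do not reproduce the actual CERIF schema.

```python
from dataclasses import dataclass
from datetime import date

# Base entities carry only their own attributes (illustrative, not the CERIF schema).
@dataclass
class Person:
    id: str
    family_name: str

@dataclass
class Publication:
    id: str
    title: str

# The semantic layer: relationship roles live in separately managed vocabularies.
AUTHOR_ROLE = ("example-vocabulary", "Author")   # (scheme, term) - hypothetical values

# A link entity expresses a role-based, time-bounded relationship between two base entities.
@dataclass
class PersonPublicationLink:
    person_id: str
    publication_id: str
    role: tuple          # drawn from the semantic layer, not hard-coded in the entity
    start: date
    end: date

if __name__ == "__main__":
    p = Person("per-1", "Houssos")
    pub = Publication("pub-1", "Research information management: the CERIF approach")
    link = PersonPublicationLink(p.id, pub.id, AUTHOR_ROLE, date(2014, 1, 1), date(2014, 12, 31))
    print(link)
```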


Procedia Computer Science | 2014

OpenAIRE Guidelines for CRIS Managers: Supporting Interoperability of Open Research Information through Established Standards

Nikos Houssos; Brigitte Jörg; Jan Dvořák; Pedro Príncipe; Eloy Rodrigues; Paolo Manghi; Mikael Karstensen Elbæk

OpenAIRE is the European infrastructure enabling researchers to comply with the European Union requirements for Open Access to research results. OpenAIRE collects metadata from data sources across Europe and beyond and defines interoperability guidelines to assist providers in exposing their information in a way that is compatible with OpenAIRE. This contribution focuses on a specific type of data source, CRIS systems, and the respective OpenAIRE guidelines, based on CERIF XML. A range of issues, spanning different aspects of information representation and exchange, needed to be addressed by the guidelines in order to define a complete solution for interoperability.
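As a rough illustration of what exposing information in an OpenAIRE-compatible way can look like on the provider side, the sketch below serialises a single publication record as CERIF-flavoured XML. The element names, the namespace URI and the DOI value are placeholders chosen for the example; the authoritative element set and namespace are defined by the OpenAIRE Guidelines for CRIS Managers themselves.

```python
import xml.etree.ElementTree as ET

# Placeholder namespace; the real one is defined by the OpenAIRE CRIS guidelines.
CERIF_NS = "https://example.org/openaire-cerif-profile"
ET.register_namespace("cerif", CERIF_NS)

def publication_record(internal_id: str, title: str, year: str, doi: str) -> bytes:
    """Serialise one publication from a CRIS as a minimal CERIF-style XML record."""
    pub = ET.Element(f"{{{CERIF_NS}}}Publication", {"id": internal_id})
    ET.SubElement(pub, f"{{{CERIF_NS}}}Title").text = title
    ET.SubElement(pub, f"{{{CERIF_NS}}}PublicationDate").text = year
    ET.SubElement(pub, f"{{{CERIF_NS}}}DOI").text = doi
    return ET.tostring(pub, encoding="utf-8")

if __name__ == "__main__":
    print(publication_record(
        "pub-42",
        "OpenAIRE Guidelines for CRIS Managers",
        "2014",
        "10.0000/example-doi",   # illustrative value, not a real DOI
    ).decode())
```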


Metadata and Semantics Research | 2012

The Data Model of the OpenAIRE Scientific Communication e-Infrastructure

Paolo Manghi; Nikos Houssos; Marko Mikulicic; Brigitte Jörg

The OpenAIREplus project aims to further develop and operate the OpenAIRE e-infrastructure in order to provide a central entry point to Open Access and non-Open Access publications and datasets funded by the European Commission and national agencies. The infrastructure provides the services to populate, curate, and enrich an Information Space by collecting metadata descriptions of organizations, data sources, projects, funding programmes, persons, publications, and datasets. Stakeholders in the research process and scientific communication, such as researchers, funding agencies, organizations involved in projects, and project coordinators, can find here the information to improve their research and statistics to measure the impact of Open Access and funding schemes on research. In this paper, we introduce the functional requirements to be satisfied and describe the OpenAIREplus data model entities and relationships required to represent information capable of meeting them.
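The entity list in the abstract (organisations, data sources, projects, funding programmes, persons, publications, datasets) already suggests a graph-shaped information space. The fragment below is a hedged sketch of how such an information space could be held in memory for experimentation; the entity types, relationship labels and values are invented for the example and do not reproduce the actual OpenAIREplus data model.

```python
from collections import defaultdict

# Minimal in-memory "information space": typed entities plus labelled relationships.
entities = {}                       # id -> (entity_type, attributes)
relations = defaultdict(list)       # source id -> list of (relation_label, target id)

def add_entity(eid, etype, **attrs):
    entities[eid] = (etype, attrs)

def relate(src, label, dst):
    relations[src].append((label, dst))

# Populate with the kinds of objects named in the abstract (values are illustrative).
add_entity("proj-1", "project", acronym="OpenAIREplus")
add_entity("org-1", "organization", name="Example Research Institute")
add_entity("pub-1", "publication", title="Example article")
add_entity("data-1", "dataset", title="Example sensor dataset")

relate("pub-1", "producedBy", "proj-1")
relate("data-1", "producedBy", "proj-1")
relate("pub-1", "hasAffiliatedOrg", "org-1")
relate("pub-1", "references", "data-1")

# A toy query: everything linked to a given project, as a funder or coordinator might ask.
linked = [src for src, edges in relations.items()
          if any(dst == "proj-1" for _, dst in edges)]
print(sorted(linked))   # -> ['data-1', 'pub-1']
```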


Procedia Computer Science | 2014

From Open Data to Data-Intensive Science through CERIF

Keith G. Jeffery; Anne Asserson; Nikos Houssos; Valérie Brasse; Brigitte Jörg

OGD (Open Government Data) is provided by government departments for transparency and to stimulate a market in ICT services for industry and citizens. Research datasets from publicly funded research are commonly associated with the open scholarly publications movement. However, the former are commonly derived from the latter through generalisation and summarisation. There is an advantage in a user of OGD being able to 'drill down' to the underlying research datasets. OGD encourages cross-domain research because summarised data from different domains are more easily related. Bridging the two worlds requires rich metadata; CERIF (Common European Research Information Format) has proved ideally suited to this requirement. Utilising the research datasets is data-intensive science, a component of e-Research. Data-intensive science also requires access to an e-infrastructure, and virtualisation of this e-infrastructure optimises its use.
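In data terms, the 'drill-down' from an aggregated open-government figure to the research datasets behind it amounts to carrying provenance links in the metadata of the summarised record. The following sketch only illustrates that idea; the identifiers, field names and values are invented, and the abstract's claim is that CERIF can carry such links, not that it uses this structure.

```python
# An aggregated OGD record that keeps pointers to the detailed research datasets
# it was summarised from (identifiers and field names are invented for illustration).
ogd_record = {
    "indicator": "average river water quality, 2013",
    "value": 7.2,
    "derived_from": ["dataset:river-sensors-2013", "dataset:lab-samples-2013"],
}

research_datasets = {
    "dataset:river-sensors-2013": {"title": "Hourly sensor readings", "access": "open"},
    "dataset:lab-samples-2013": {"title": "Laboratory sample analyses", "access": "open"},
}

def drill_down(record, catalogue):
    """Resolve the provenance links of a summarised record to the underlying datasets."""
    return [catalogue[pid] for pid in record["derived_from"] if pid in catalogue]

for ds in drill_down(ogd_record, research_datasets):
    print(ds["title"])
```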


Procedia Computer Science | 2014

OpenAIRE Guidelines: supporting interoperability for Literature Repositories, Data Archives and CRIS

Pedro Príncipe; Najla Rettberg; Eloy Rodrigues; Mikael Karstensen Elbæk; Jochen Schirrwagen; Nikos Houssos; Lars Holm Nielsen; Brigitte Jörg

OpenAIRE – Open Access Infrastructure for Research in Europe – is moving from a publication infrastructure to a more comprehensive infrastructure that covers all types of scientific output. To put this into practice, an integrated suite of guidelines was developed with specific requirements supporting the goals of OpenAIRE and the European Commission. This poster outlines the OpenAIRE Guidelines, highlighting the set of guidelines for Literature Repository Managers, for Data Archive Managers and for CRIS Managers.


Archive | 2014

Theory and Practice of Digital Libraries -- TPDL 2013 Selected Workshops

Łukasz Bolikowski; Vittore Casarosa; Paula Goodale; Nikos Houssos; Paolo Manghi; Jochen Schirrwagen

This article describes a case study of a small research group collecting and managing data from a pair of long-running experimental campaigns, detailing the data management and publication processes in place at the time of the experiments. It highlights the reasons why publications became disconnected from their underlying data in the past, and identifies the new processes and principles which aim to address these issues.


Procedia Computer Science | 2014

Providing an application-specific interface over a CERIF back-end: challenges and solutions

Dragan Ivanović; Nikos Houssos

This paper presents a case of exposing information modelled in CERIF through an application-specific programming interface that does not require CERIF expertise from developers. The CERIF data model is semantically rich and can be used for detailed description of the entities of scientific and research activity. A sophisticated application interface is required to exploit the full range of CERIF capabilities. On the other hand, data about scientific-research entities are used by users and software developers with varying levels of familiarity with the CERIF model. Because of this diversity of CRIS users' knowledge of CERIF, in certain cases it is useful to create an additional application interface based on a simple model that can be easily understood and used by those with little or no knowledge of CERIF. The article presents the design and implementation of a wrapper that enables bidirectional conversion between data entered through a simple-model application interface and a CERIF back-end, and the use of the wrapper in the ENGAGE project as part of an open infrastructure for Public Sector Information.
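The wrapper described here sits between a deliberately simple application model and the richer CERIF representation, translating in both directions. The sketch below illustrates that pattern; the simplified field names and the flattened CERIF-like target structure are assumptions made for the example and are not the ENGAGE implementation.

```python
# A deliberately simple application-side model (field names are illustrative).
simple_record = {"title": "Open PSI dataset", "author": "Jane Doe", "year": 2014}

def to_cerif(simple):
    """Map the simple model onto a richer, CERIF-like structure (illustrative only):
    base entities for the result and the person, plus a role-based link entity."""
    return {
        "ResultPublication": {"id": "pub-1", "Title": simple["title"],
                              "PublicationDate": str(simple["year"])},
        "Person": {"id": "per-1", "Name": simple["author"]},
        "Person_ResultPublication": {"PersonId": "per-1", "PublicationId": "pub-1",
                                     "role": "Author"},
    }

def from_cerif(cerif):
    """Inverse mapping: flatten the CERIF-like structure back to the simple model."""
    return {
        "title": cerif["ResultPublication"]["Title"],
        "author": cerif["Person"]["Name"],
        "year": int(cerif["ResultPublication"]["PublicationDate"]),
    }

cerif_view = to_cerif(simple_record)
assert from_cerif(cerif_view) == simple_record   # round trip preserves the simple view
print(cerif_view)
```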


International Conference on Management of Data | 2014

Report on the First Workshop on Linking and Contextualizing Publications and Datasets

Paolo Manghi; Lukasz Bolikowski; Nikos Houssos; Jochen Schirrwagen

Contemporary scholarly communication is undergoing a paradigm shift, which in some ways echoes the one from the start of the Digital Era, when publications moved to a digital form. There are multiple reasons for this change, and three prominent ones are: (i) the emergence of data-intensive science (Jim Gray's Fourth Paradigm), (ii) evolving reading patterns in modern science, and (iii) increasing heterogeneity of research communication practices (and technologies). Motivated by e-Science methodologies and data-intensive science, contemporary scientists are increasingly embracing new data-centric ways of conceptualizing, organizing and carrying out their research activities. Such a paradigm shift strongly affects the way scholarly communication is conducted, promoting datasets to first-class citizens of scientific dissemination. Scientific communities are eagerly investigating and devising solutions for scientists to publish their raw and secondary datasets – e.g. sensor data, tables, charts, questionnaires – to enable: (i) discovery and re-use of datasets, and (ii) rewarding the scientists who produced the datasets after often meticulous and time-consuming efforts. Data publishing is still not a reality in many communities, while for others it has already solidified into procedures and policies.

Due to the ability to have immediate Web access to all published material, be they publications or datasets, scientists are today faced with a daily wave of new potentially relevant research results. Several solutions have been devised to go beyond the simple digital article and facilitate the identification of relevant and quality material. Approaches aim at enriching publications with semantic tags, quality evaluations, feedback, pointers to authority files (for example persistent identifiers of authors, affiliations, and funding) or links to other research material. Such trends find their motivation not only in the need of scientists to share a richer perspective of research outcomes, but also in the traditional and novel needs of research organisations and funding agencies to: (i) measure research impact in order to assess and reward their initiatives, e.g. research outcomes must be linked to affiliations, authorships, and grants, and (ii) guarantee that the results of public research are made available as interlinked and contextualized Open Access material, e.g. research datasets are interlinked with related publications and made available via online data repositories and publication repositories. The most prominent example of such requirements is provided by the European Commission with the Open Access mandates for publications and data in Horizon 2020.

Finally, researchers rely on different technologies and systems to deposit and preserve their research outcomes and their contextual information. Datasets and publications are kept in data centres and in institutional and thematic repositories together with descriptive metadata. Contextual information is scattered across other systems, for example CRIS systems for funding schemes and affiliations, and national and international initiatives and registries, such as ORCID and VIAF for authors and notable people in general. The construction of modern scholarly communication systems capable of collecting and assembling such information in a meaningful way has opened up several research challenges in the fields of Digital Libraries, e-Science, and e-Research.

Solving the above challenges would foster multidisciplinarity, generate novel research opportunities, and endorse quality research. To this aim, sectors of scholarly communication and digital libraries are investigating solutions for "interlinking" and "contextualizing" datasets and scientific publications. Such solutions span publishing methodologies, processes, and policies, as well as technical aspects involving
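One concrete instance of "interlinking and contextualizing" is a typed link between a publication and a dataset that also carries pointers into authority files such as ORCID for the people involved. The following sketch only illustrates that idea with made-up identifier values and field names; it does not reflect any particular system discussed at the workshop.

```python
# A typed publication-dataset link enriched with contextual identifiers.
# All identifier values below are placeholders, not real records.
link = {
    "source": {"type": "publication", "pid": "doi:10.0000/example-article"},
    "target": {"type": "dataset", "pid": "doi:10.0000/example-dataset"},
    "relation": "isSupplementedBy",
    "context": {
        "authors": [{"name": "A. Researcher", "orcid": "0000-0000-0000-0000"}],
        "funding": {"funder": "European Commission", "grant": "EXAMPLE-123456"},
    },
}

def related_pids(link_record, relation):
    """Return the target identifiers for links of the requested relation type."""
    return [link_record["target"]["pid"]] if link_record["relation"] == relation else []

print(related_pids(link, "isSupplementedBy"))
```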


CRIS | 2012

A multi-level metadata approach for a Public Sector Information data infrastructure

Nikos Houssos; Brigitte Jörg; Brian Matthews


DC-2013, Lisbon, Portugal | 2014

A 3-Layer Model for Metadata

Keith G. Jeffery; Anne Asserson; Nikos Houssos; Brigitte Jörg

Collaboration


Dive into Nikos Houssos's collaboration.

Top Co-Authors

Paolo Manghi
Istituto di Scienza e Tecnologie dell'Informazione

Keith G. Jeffery
Rutherford Appleton Laboratory

Mikael Karstensen Elbæk
Technical University of Denmark

Vittore Casarosa
Istituto di Scienza e Tecnologie dell'Informazione

Najla Rettberg
University of Göttingen