A Unified Nanopublication Model for Effective and User-Friendly Access to the Elements of Scientific Publishing
Cristina-Iulia Bucur, Tobias Kuhn, and Davide Ceolin

Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
{c.i.bucur,t.kuhn}@vu.nl
Centrum Wiskunde & Informatica, Amsterdam, The Netherlands
[email protected]
Abstract.
Scientific publishing is the means by which we communicate and share scientific knowledge, but this process currently often lacks transparency and machine-interpretable representations. Scientific articles are published in long coarse-grained text with complicated structures, and they are optimized for human readers and not for automated means of organization and access. Peer reviewing is the main method of quality assessment, but these peer reviews are nowadays rarely published and their own complicated structure and linking to the respective articles is not accessible. In order to address these problems and to better align scientific publishing with the principles of the Web and Linked Data, we propose here an approach to use nanopublications as a unifying model to represent in a semantic way the elements of publications, their assessments, as well as the involved processes, actors, and provenance in general. To evaluate our approach, we present a dataset of 627 nanopublications representing an interlinked network of the elements of articles (such as individual paragraphs) and their reviews (such as individual review comments). Focusing on the specific scenario of editors performing a meta-review, we introduce seven competency questions and show how they can be executed as SPARQL queries. We then present a prototype of a user interface for that scenario that shows different views on the set of review comments provided for a given manuscript, and we show in a user study that editors find the interface useful to answer their competency questions. In summary, we demonstrate that a unified and semantic publication model based on nanopublications can make scientific communication more effective and user-friendly.
1 Introduction

Scientific publishing is about how we disseminate, share and assess research. Despite the fact that technology has changed how we perform and disseminate research, there is much more potential for scientific publishing to become a more transparent and more efficient process, and to improve on the age-old paradigms of journals, articles, and peer reviews [3, 27]. With scientific publishing often stuck to formats optimized for print such as PDF, we are not using the advances that are available to us with technologies around the Semantic Web and Linked Data [6, 34].

In this work we aim to address some of these problems by looking at the scientific publishing process at a finer-grained level and recording formal semantics for the different elements. Instead of treating big bulks of text as such, we propose to represent them as small snippets, e.g. paragraphs, that have formal semantics attached and can be treated as independent publication units. They can link to other such units and therefore form a larger entity, such as a full paper or review, by forming a complex network of links. With that approach, we can ensure that the provenance of each snippet of information can be accurately tracked together with its creation time and author, and therefore allow for more flexible and more efficient publishing than the current paradigm. A process like peer reviewing can then be broken down into small snippets and thereby take the specialization of reviewers and the detailed context of their review comments into account, and these review comments can formally and precisely link to exactly the parts of the paper they address.
Each article, paragraph and each review comment thereby forms a single node in a network and is each identified by a dereferenceable URI. We demonstrate here how we can implement such a system with the existing concept and technology of nanopublications, a Linked Data format for storing small assertions together with their provenance and metadata. We then show how this approach allows us to build powerful and user-friendly interfaces to aggregate and access larger numbers of such small communication elements, and we demonstrate this on the concrete case of a system for editors to assess manuscripts based on a set of review comments.

In this research we aim to answer the following research questions:

1. Can we use nanopublications as a unifying data model to represent the structure and links of manuscripts and their assessments in a precise, transparent, and provenance-aware manner?
2. Is a fine-grained semantic publishing and reviewing model able to provide us with answers to common competency questions that journal editors face in their work as meta-reviewers?
3. Can we design an intuitive and effective interface based on a fine-grained semantic publishing and reviewing model that supports journal editors in judging the quality of manuscripts based on the received reviews?

We address these research questions with the following contributions:

- A general scheme of how nanopublications can be used to represent and publish different kinds of interlinked publication elements
- A dataset of 627 nanopublications, implementing this scheme to represent exemplary articles and their open reviews
- A set of seven competency questions for the scenario of journal editors meta-reviewing a manuscript, together with SPARQL representations of these questions
- A prototype of a fine-grained semantic analysis interface for the above scenario and dataset, powered by nanopublications
- Results from a user study on the perceived importance of the above competency questions and the perceived usefulness of the above prototype for answering them

The rest of this article is structured as follows. In Section 2 we describe the current state of the art in the field of scientific publishing and the reviewing process in particular. In Section 3 we describe our approach with regard to performing the reviewing process in a fine-grained manner based on nanopublications. In Section 4.1 we describe in detail how we performed the evaluation of our approach, while we report and discuss the results of this evaluation in Section 4.2. Future work and the conclusion of the present research are outlined in Section 5.
2 Background

Before we move on to describe our approach, we give here the relevant background on scientific publishing, semantic papers, and the specific concept and technology of nanopublications.

Scientific publishing is at the core of scientific research, which has moved in the last decades from print to online publishing [35]. It is, however, still mostly following the paradigm from the print age, with narrative articles being published in journals and assessed by peer reviewers, only the printed volumes having been replaced by PDF files that are made accessible via search engines [21]. Considering the ever increasing number of articles and the increasing complexity of research methods, this old paradigm of publishing seems to have reached its limit, and scientists are struggling to stay up to date in their specific fields [20]. Slowly but steadily, these old paradigms are shifting, with open access publishing, semantically enriched content, data publication, and machine-readable metadata gaining momentum and importance [32, 36]. Opposition is also growing against the use of the impact factor [8, 9, 23] or h-index as metrics for the assessment of the participants in this publication process, and it has been shown that these metrics can be tampered with easily [1, 7, 28, 30].

Advances in Semantic Web technologies like RDF, OWL, and SPARQL have allowed for the semantic enhancement of scholarly journal articles when publishing data and metadata [31, 33]. As such, semantic publishing was proposed as a way to make scholarly publications discoverable, interactive, open and reusable for both humans and machines, and to release them as Linked Open Data [12, 22, 29]. In order to extract formal semantics from already published papers in an automated manner, sophisticated methods were developed, such as the compositional and iterative semantic enhancement method (CSIE) [24], conceptual frameworks for modelling contexts associated with sentences in research articles [2], and semantic lenses [11].
Furthermore, HTML formats like RASH have been proposed to represent scientific papers that include semantic annotations [26], and vocabularies like the SPAR (Semantic Publishing and Referencing) suite of ontologies have been introduced to semantically model all aspects relevant to scientific publishing [25]. These approaches mostly work on already published articles, but it has been argued that scientific findings and their contexts should be expressed in semantic representations from the start by the researchers themselves, in what has been named genuine semantic publishing [17].

In our previous work [5], we applied the general principles of the Web and the Semantic Web to promote this kind of genuine semantic publishing [17] by applying it to peer reviews. We proposed a semantic model for reviewing at a finer-grained level called Linkflows and argued that Linked Data principles like dereferenceable URIs using open standards like RDF can be used for publishing small snippets of information, such as an individual review comment, instead of big chunks of text, such as an entire review. These small snippets of text can be represented as nodes in a network and can be linked with one another with semantically annotated connections, thus forming distributed and semantically annotated networks of contributions. The individual review comments are semantically modeled with respect to what part of the paper they target, whether they are about syntax or content, whether they raise a positive or negative point, whether they are a suggestion or compulsory, and what their impact on the quality of the paper is. We showed with this model that it is indeed beneficial if we capture these semantics at the source (i.e.
the peer reviewer in this case).

Nanopublications [10] are a specific concept and technology based on Linked Data to publish scientific results and their metadata in small publication units. Each nanopublication has an assertion that contains the main content (such as a scientific finding), and comes with provenance about that assertion (e.g. what study was conducted to arrive at the assertion, or which documents it was extracted from) and with publication information about the nanopublication as a whole (e.g. by whom and when it was created). All three parts are represented in RDF and are thereby machine-interpretable.

It has been shown how nanopublications can also be used for other kinds of assertions, including meta-statements about other nanopublications [14], and in order to make nanopublications verifiable and immutable, trusty URIs [16] can be used as identifiers, which include cryptographic hash values that are calculated on the nanopublication's content. A decentralized server network has been established based on this, through which anybody can reliably publish and retrieve nanopublications [18]. In order to group nanopublications into larger collections and versions thereof, index nanopublications have been introduced [19]. With these technologies, small interconnected Linked Data snippets can be published in a reliable, decentralized, provenance-aware manner.
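The three-part anatomy described above can be made concrete by assembling a minimal nanopublication in the TriG serialization. The following sketch does this with a plain string template; the example URI and graph contents are invented placeholders, and a real implementation would use an RDF library and generate a trusty URI:

```python
# Sketch of the three-part nanopublication structure: a head graph linking
# to one named graph each for assertion, provenance, and publication info.
# All URIs and example triples below are hypothetical placeholders.

def build_nanopub(uri: str, assertion: str, provenance: str, pubinfo: str) -> str:
    """Assemble a nanopublication in TriG, with one named graph per part."""
    return f"""@prefix np: <http://www.nanopub.org/nschema#> .
@prefix : <{uri}#> .

:Head {{
    : a np:Nanopublication ;
        np:hasAssertion :assertion ;
        np:hasProvenance :provenance ;
        np:hasPublicationInfo :pubinfo .
}}
:assertion {{ {assertion} }}
:provenance {{ {provenance} }}
:pubinfo {{ {pubinfo} }}
"""

trig = build_nanopub(
    "http://example.org/nanopub1",                    # hypothetical URI
    ":comment1 a :ReviewComment .",                   # the main content
    ":assertion :wasAttributedTo <https://orcid.org/0000-0000-0000-0000> .",
    ": :created \"2020-06-01\" .",
)
print(trig)
```

The `np:` terms are the ones of the nanopublication schema; everything else is illustrative only.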
3 Approach

Our general approach is to investigate the benefits of using nanopublications as a unifying publishing unit to establish a new paradigm of scientific communication that is better aligned with the principles of the Web and Linked Data. We investigate how such an approach could allow us to communicate in a more efficient, more precise, and more user-friendly manner.

Fig. 1. An example of a nanopublication-style communication interaction. (The figure shows the title of the paper "Automating the Semantic Publishing - Applying a format-independent and language-agnostic approach for the compositional and iterative semantic enhancement of scholarly articles" targeted by two review comments with semantic classes such as negative/content/compulsory and neutral/content/suggestion, the authors' answers linked to these comments via isResponseTo, and an updated title linked to the original via isUpdateOf.)
Our unifying semantic model based on nanopublications uses a number of existing ontologies like SPAR, PROV-O, FAIR* reviews, the Web Annotation Data Model, and our own Linkflows model [5] to break the big bulks of article and review texts into smaller text snippets. An example of a nanopublication-style communication interaction during the reviewing process is illustrated in Figure 1, where the title of a paper is addressed by several review comments that come with semantic classes (e.g. suggestion), which are themselves referred to by the authors' answers that link them to the updated version. Each node in this network is represented as a separate nanopublication and all the attributes and relations are formally represented as Linked Data.

As we can see in Figure 1, the properties refersTo, isResponseTo, and isUpdateOf play the key role of linking the different nodes in this network. refersTo links a review comment to the text snippet in the article it refers to. isResponseTo links the answer of the authors to the review comments of the reviewer and also to new versions of the text snippets that these review comments triggered. isUpdateOf links one version of a text snippet to another.

In our approach, snippets of scientific articles (mostly corresponding to paragraphs) as well as their review comments (corresponding to individual review comments) are semantically represented as nanopublications [10], and thereby they each form a node in the network described above. A complete example of such a nanopublication containing a review comment is shown in Figure 2.

Fig. 2. Example nanopublication of a review comment.

Each of the three main parts of a nanopublication (assertion, provenance, and publication info) is represented as an RDF graph. In the example of Figure 2, the assertion graph describes a review comment using the classes and properties of the Linkflows model (https://github.com/LaraHack/linkflows_model). It raises a negative point with an importance
of 2 out of 5, and is marked as a suggestion for the authors. Furthermore, we see that this review comment refers to an external element, with a URI ending in , as the target of this comment. This external element happens to be a paragraph of an article described in another nanopublication, which we can find out by following that trusty URI link.

Moreover, the nanopublication contains information regarding the creator of the assertion and the creator of the nanopublication that contains this assertion. These pieces of information can be found in the provenance and publication info graphs. As illustrated in Figure 2, the author of the review comment is indicated by his ORCID identifier, and the original source of the review comment is indicated by the URL pointing to a link of the Semantic Web Journal. From the publication info graph, we can see who created the whole nanopublication together with the date and time of its creation.

With nanopublications, the provenance and immutability of these small contributions can be guaranteed by the usage of trusty URIs [15]. As such, for every nanopublication, in order for it to be published, a unique immutable URI is generated to refer to the node that holds the nanopublication. Any change of this nanopublication results in the generation of a new nanopublication, thus of a new node that is linked to the previous one. Such nanopublications can then be published in the existing decentralized nanopublication network [18].
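The content-hashing idea behind such immutable identifiers can be sketched as follows. This is a simplification, not the actual trusty URI algorithm, which additionally normalizes the RDF content and prefixes the hash with a module identifier; the example content and base URI are invented:

```python
import base64
import hashlib

def content_hash_suffix(content: str) -> str:
    """Base64url-encoded SHA-256 hash of the given content string.
    Simplified stand-in for the trusty URI hash computation."""
    digest = hashlib.sha256(content.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest).decode("ascii").rstrip("=")

# Any change to the content yields a different identifier, so a published
# nanopublication is effectively immutable: an update becomes a new node.
nanopub_v1 = ":comment1 a :ReviewComment ."   # hypothetical content
nanopub_v2 = ":comment1 a :AnswerComment ."   # changed content
uri_v1 = "http://example.org/np/" + content_hash_suffix(nanopub_v1)
uri_v2 = "http://example.org/np/" + content_hash_suffix(nanopub_v2)
assert uri_v1 != uri_v2
```

A new version would then link back to the previous one, e.g. via isUpdateOf, keeping the whole history addressable.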
3.2 Use Case: Editors Performing Meta-Reviews

In the scientific publishing context, editors of journals play a key role, being an important link between the content providers for journals (the authors), the people who assess the quality of the content (the peer reviewers), and the consumers of such content (the readers). While the peer reviewers are the ones that can recommend the acceptance or rejection of an article, it is up to the editors to make the final decision. We will look here into how our approach can benefit the specific scenario of editors assessing a manuscript based on given reviews and having to write a meta-review.

Performing such a meta-review is not a trivial task. As classical reviews are mainly comprised of large bulks of text in natural language, it is hard to provide a tool with quantitative information about the reviews and their collective implications on the manuscript. As such, an editor needs to spend a lot of time just to read these reviews fully to even get an overview of the nature and range of the raised issues.

In order to apply our approach to this chosen use case, we first define a set of competency questions (CQs), which are natural language questions that are created with the objective to assess the practicality and coverage of an ontology or model [4]. After consulting with publishing experts at IOS Press and the Netherlands Institute for Sound and Vision, we came up with the following seven quantifiable competency questions from an editor's point of view during meta-reviewing:

- CQ1: What is the number of positive comments and the number of negative comments per reviewer?
- CQ2: What is the number of positive comments and the number of negative comments per section of the article?
- CQ3: What is the distribution of the review comments with respect to whether they address the content or the presentation (syntax and style) of the article?
- CQ4: What is the nature of the review comments with respect to whether they refer to a specific paragraph or a larger structure such as a section or the whole article?
- CQ5: What are the critical points that were raised by the reviewers, in the sense of negative comments with a high impact on the quality of the paper?
- CQ6: How many points were raised that need to be addressed by the authors, as an estimate for the amount of work needed for a revision?
- CQ7: How do the review comments cover the different sections and paragraphs of the paper?
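The kind of aggregation behind a question like CQ1 can be sketched in a few lines. The comment records and field names below are hypothetical stand-ins for the triples that the SPARQL implementation matches; the point is that CQ1 reduces to a grouped count over (reviewer, polarity) pairs:

```python
from collections import Counter

# Hypothetical review-comment records, standing in for the Linkflows
# triples that a SPARQL query for CQ1 would match.
comments = [
    {"reviewer": "Reviewer 1", "polarity": "positive"},
    {"reviewer": "Reviewer 1", "polarity": "negative"},
    {"reviewer": "Reviewer 2", "polarity": "negative"},
    {"reviewer": "Reviewer 2", "polarity": "negative"},
]

# CQ1: number of positive and negative comments per reviewer,
# i.e. a GROUP BY over (reviewer, polarity).
counts = Counter((c["reviewer"], c["polarity"]) for c in comments)
for (reviewer, polarity), n in sorted(counts.items()):
    print(reviewer, polarity, n)
```

The other competency questions follow the same pattern with different grouping keys (section, content/presentation dimension, granularity of the target, and so on).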
In order to evaluate our approach on the given use case, we first need some data. For this, we selected three papers that were submitted to a journal that has open reviews (the Semantic Web Journal). Therefore, we could also access the full text of the reviews these papers received. We then manually modelled all the articles, paragraphs, review comments, their interrelations, as well as their larger structures, in the form of sections and full articles and reviews, as individual nanopublications according to our approach. All these elements were thereby semantically modeled, and we could reuse part of our earlier dataset (https://github.com/LaraHack/linkflows_model_implementation).

In order to apply and evaluate our approach on the chosen use case, we developed a prototype of an editor interface that accesses the nanopublications in the dataset presented above to provide a detailed and user-friendly interface to support editors in their meta-reviewing tasks.

This prototype comes with two views: one where the review comments are shown per reviewer in a bar chart broken down into the different dimensions and classes, as shown in Figure 3, and another view that focuses on the distribution of the review comments over the different sections of the article, as shown in Figure 4. The interface for an exemplary article with three reviews can be accessed online (http://linkflows.nanopubs.lod.labs.vu.nl). The shown content is aggregated from nanopublications stored in a triple store and displayed by showing color codes for the different Linkflows classes for the individual review comments.

In the reviewer-oriented view (Figure 3), we can see in a more quantitative way the set of review comments and their types represented in different colors, where the checkboxes in the legend can be used to filter the review comments of the given category.
To see the content of the review comments that are in a certain dimension, it is sufficient to just click on a bar in the chart.

The section-oriented view (Figure 4) aggregates all the finer-grained dimensions of the review comments at the level of sections in an article. Again, clicking on one cell in the table, thus selecting one specific dimension of the review comments, will show the content of those review comments underneath the table in the interface.

When data from the triple store is required, the server (implemented in NodeJS with the Express web application framework; https://nodejs.org, https://expressjs.com/) sends a request to the Virtuoso triple store where the nanopublications are stored. This request executes a SPARQL query on the stored nanopublications and returns the result to the server that, in turn, passes it further to the client in the web browser, where the results are postprocessed and visualized. The code for the prototype can be found online.

Fig. 3. The reviewer-oriented view for the editor study.

4 Evaluation

Here we present the evaluation of our approach in the form of a descriptive analysis, the analysis of the SPARQL implementations of our competency questions, and a user study with editors on our prototype interface.
4.1 Evaluation Design

First, we run a small descriptive analysis on the nanopublication dataset that we created. We can quantify the size and interrelation of the represented manuscripts and reviews in new ways, including the number of nanopublications, triples, paragraphs, review comments, and links between them. We also tested how long it takes to download all 627 nanopublications from the server network, using nanopub-java [13] as a command-line tool and giving it only the URI of the index nanopublication. This small download test was performed on a personal computer via a normal home network. For this, we retrieved them all via the library's get command and measured the time. We performed this 50 times, in five batches of 10 executions.

The code and data are available online (interface: https://github.com/LaraHack/linkflows_interfaces; backend application: https://github.com/LaraHack/linkflows_model_app; data: https://github.com/LaraHack/linkflows_model_implementation).

Fig. 4. The section-oriented view for the editor study.
Next, we used our dataset to see if we are able to answer the seven competency questions that we defined above, in order to help editors in their meta-reviewing task. With this, we want to find out whether the combination of ontologies and vocabularies we used in our approach is sufficient to cover them, and whether we can use the SPARQL query language to operationalize them and make them automatically executable on our nanopublication data.

Finally, we perform a user experiment involving editors to find out whether they indeed consider our competency questions important, and how useful they find our interface for getting an answer to these questions. For this study, we created a form that had two parts, corresponding to the two parts of the study. We chose an article from our dataset that had a large number of review comments. For the first part, we asked for the importance of the competency questions using a Likert scale (from 1 to 5). For the second part, we provided static screenshots of our tool (the reviewer-oriented or the section-oriented view, depending on the question) together with a link to the live demo, and asked how useful the participants would find such a tool to answer the given competency question. The answers were given on the same kind of Likert scale from 1 to 5. We sent this questionnaire (details online: https://github.com/LaraHack/linkflows_editor_survey/) to a total of 401 editors of journals that support open reviews, specifically Data Science, the Semantic Web Journal, and PeerJ Computer Science.

4.2 Results

We can now turn to the results of these three parts of our evaluation. Details about the dataset and how it was generated, as well as further queries and results, can be found online (https://github.com/LaraHack/linkflows_model_implementation).

Table 1. Descriptive statistics of the dataset.

part of article    number
articles           3
sections           89
paragraphs         279
figures            11
tables             10
formulas           8
footnotes          2
review comments    213
Table 2.
Statistics on the nanopublications.

                           number   average
Nanopublications           627
Head triples               2508     4.00
Assertion triples          5420     8.64
Provenance triples         1254     2.00
Publication info triples   1255     2.00
Total triples              10 437   16.65
Descriptive Analysis.
Our representation of the three papers of our dataset together with their reviews leads to a total of 10 437 triples in 627 nanopublications, with 279 text snippets and 213 review comments (85 for article 1, 59 for article 2, and 69 for article 3). Each of the three articles had three reviews: the first article received 17, 18 and 50 review comments from its three reviewers, the second article 16, 21 and 22, and the third article 11, 42 and 16.

In Table 1 some general statistics of the dataset are presented, while Table 2 shows general statistics about the nanopublications corresponding to the three articles and their reviews. Overall, this demonstrates the working of our approach of representing the elements of scientific communication in a fine-grained semantic manner. Of course, more complex analyses are possible, including network analyses of the complex interaction structure, and the queries for the competency questions that we defined above, to which we come back below.

Our small test on the performance of retrieving all nanopublications from the decentralized nanopublication network showed an average download time of 11.66 seconds overall (with a minimum of 8.39 and a maximum of 13.34 seconds). This operation retrieves each of the 627 nanopublications separately and then combines them in a single output file. The time per nanopublication is thereby just 18.6 milliseconds, which is achieved by executing the requests in parallel to several servers in the network at the same time.
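The per-nanopublication figure follows directly from the reported average overall download time:

```python
# 627 nanopublications retrieved in 11.66 seconds on average
# amounts to roughly 18.6 ms per nanopublication.
total_seconds = 11.66
n_nanopubs = 627
per_nanopub_ms = total_seconds / n_nanopubs * 1000
print(round(per_nanopub_ms, 1))  # → 18.6
```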
Competency Question Execution.
In order to answer the competency questions in Section 3.2, we managed to implement each of them as a concrete SPARQL query. We cannot go into them in detail here due to space limitations, but the complete queries and all the required data and code can be found online (https://github.com/LaraHack/linkflows_model_implementation/tree/master/queries). This shows that our model is indeed able to capture the needed aspects for our competency questions, but we still need to find out whether these competency questions are indeed considered important by the editors, and whether the results can be presented to them in a useful way.

User Study Results.
Out of the total of 401 questionnaire requests sent, we received 42 answers (10.5%). The importance of the seven competency questions for editors and the usefulness of the interface presented to answer these competency questions can be seen in Table 3. Importance was assessed on a Likert scale from 1 to 5, where 1 is not important at all and 5 is very important; usefulness was assessed on the same kind of scale, from 1 standing for not useful at all to 5 standing for very useful. We marked with * the values with a significant p-value (< 0.05) when comparing the number of respondents that assign high importance or usefulness (≥ 3) with the number of those that assign low importance or usefulness (< 3).

Table 3. Results of the user study with editors.

          importance of question                usefulness of interface
Question  AVG   MED  SD    count<3 count≥3 p-value    AVG   MED  SD    count<3 count≥3 p-value
CQ1       3.17  3    1.36  15      27      0.044 *    3.48  4    1.17  9       33      1.36e-4 *
CQ2       2.36  2    1.10  24      18      0.860      3.83  4    1.03  5       37      2.22e-7 *
CQ3       3.64  4    0.93  5       37      1.36e-4 *  3.40  3.5  1.04  9       33      1.47e-3 *
CQ4       3.05  3    1.19  14      28      0.022 *    3.26  3    1.20  14      28      0.022 *
CQ5       4.58  5    0.63  0       42      <1e-12 *   3.21  3    1.16  9       33      1.36e-4 *
CQ6       3.57  4    1.02  6       36      1.41e-6 *  3.43  4    1.06  8       34      3.44e-5 *
CQ7       2.79  3    1.12  18      24      0.220      3.62  4    1.03  5       37      2.22e-7 *

As we can see from Table 3, the interface was on average considered useful for all seven competency questions, with averages ranging from 3.21 to 3.83. The preference for scores of 3 or larger is clearly significant for all of them. A substantial minority of respondents, however, did not find our interface useful, leading again to relatively large standard deviation values between 1.06 and 1.19. The free-text feedback field at the end of the questionnaire moreover gave us a variety of suggestions for improvement (some of the editors found the colors too much, others suggested other ways of grouping the data), but without clear overall tendencies.

5 Discussion and Conclusion

Our results show that we can practically represent the different elements of scientific communication, such as articles and reviews, in a fine-grained and semantic way with nanopublications. We could show that we can thereby automatically answer a wide range of competency questions in the concrete scenario of editors in their meta-reviewing task. We found, however, that some of these were not found to be important, on average, by the editors who participated in our user study. Specifically, the questions about how well the review comments cover the different parts of the paper were not found to be important by a majority of editors.
This could indicate that the article structure in terms of its different sections is not a good target for measuring the coverage of reviews. For all the questions, a relatively high variation is observed, which might be hinting at a lack of agreement among editors with respect to how scientific manuscripts should be assessed. This in turn could highlight the importance of more structured and more open reviewing processes. Irrespective of whether the competency questions are important, the majority of editors found our prototype to be useful to answer them, although again with a large variation. With our approach focusing on interoperability and openness, however, it is not necessary to design a single interface that suits everybody, but we could allow editors to choose from several alternatives in the future.

In summary, we could show that nanopublications might be a suitable format not just for scientific findings but also for their reviewing processes. Their open and semantic nature can moreover allow other participants outside of the assigned editor and invited reviewers to contribute with their suggestions and comments, both before and after publication, while all the provenance needed to understand the context of each contribution is recorded. In this way, publication and reviewing as a whole might become more fluid, more inclusive, and more powerful.

Acknowledgements.
This research was partly funded by IOS Press and the Netherlands Institute for Sound and Vision. The authors would like to thank Stephanie Delbeque, Maarten Fröhlich, Erwin Verbruggen, Johan Oomen, and Jacco van Ossenbruggen for providing their insight and expertise.
References
1. Alberts, B.: Impact factor distortions. Science, 787–787 (2013). https://doi.org/10.1126/science.1240319
2. Angrosh, M., Cranefield, S., Stanger, N.: Contextual information retrieval in research articles: Semantic publishing tools for the research community. Semantic Web, 261–293 (2014). https://doi.org/10.5555/2786113.2786115
3. Berners-Lee, T., Hendler, J.: Publishing on the semantic web. Nature, 1023–1024 (2001). https://doi.org/10.1038/35074206
4. Bezerra, C., Freitas, F., Santana, F.: Evaluating ontologies with competency questions. In: 2013 IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT). vol. 3, pp. 284–285 (2013). https://doi.org/10.1109/WI-IAT.2013.199
5. Bucur, C.I., Kuhn, T., Ceolin, D.: Peer reviewing revisited: Assessing research with interlinked semantic comments. In: K-CAP 2019: Proceedings of the 10th International Conference on Knowledge Capture. pp. 179–187 (2019). https://doi.org/10.1145/3360901.3364434
6. Clark, T.: Next generation scientific publishing and the web of data. Semantic Web, 257–259 (2014). https://doi.org/10.3233/SW-140139
7. Dong, P., Loh, M., Mondry, A.: The 'impact factor' revisited. Biomedical Digital Libraries (2005). https://doi.org/10.1186/1742-5581-2-7
8. Garfield, E.: Journal impact factor: a brief review. CMAJ, 979–980 (1999)
9. Garfield, E.: The history and meaning of the journal impact factor. JAMA, 90–93 (2006). https://doi.org/10.1001/jama.295.1.90
10. Groth, P., Gibson, A., Velterop, J.: The anatomy of a nanopublication. Information Services & Use, 51–56 (2010). https://doi.org/10.3233/ISU-2010-0613
11. Iorio, A.D., Peroni, S., Vitali, F., Zingoni, J.: Semantic lenses to bring digital and semantic publishing together. In: Proceedings of the 4th International Conference on Linked Science@ISWC. vol. 128, pp. 12–23 (2014)
12. Jacob, B., Ortiz, J.: Data.world: A platform for global-scale semantic publishing. In: Proceedings of the ISWC Posters & Demonstrations and Industry Tracks co-located with the 16th International Semantic Web Conference (ISWC) (2017)
13. Kuhn, T.: nanopub-java: A Java library for nanopublications. In: Proceedings of the 5th Workshop on Linked Science (LISC 2015). vol. 1572, pp. 19–25 (2015)
14. Kuhn, T., Barbano, P.E., Nagy, M.L., Krauthammer, M.: Broadening the scope of nanopublications. In: Extended Semantic Web Conference. pp. 487–501 (2013). https://doi.org/10.1007/978-3-642-38288-8_33
15. Kuhn, T., Dumontier, M.: Trusty URIs: Verifiable, immutable, and permanent digital artifacts for linked data. In: Proceedings of the 11th Extended Semantic Web Conference (ESWC). vol. 8465, pp. 395–410 (2014). https://doi.org/10.1007/978-3-319-07443-6_27
16. Kuhn, T., Dumontier, M.: Making digital artifacts on the web verifiable and reliable. IEEE Transactions on Knowledge and Data Engineering, 2390–2400 (2015). https://doi.org/10.1109/TKDE.2015.2419657
17. Kuhn, T., Dumontier, M.: Genuine semantic publishing. Data Science, 139–154 (2017). https://doi.org/10.3233/DS-170010
18. Kuhn, T., et al.: Decentralized provenance-aware publishing with nanopublications. PeerJ Computer Science (2016). https://doi.org/10.7717/peerj-cs.78
19. Kuhn, T., et al.: Reliable granular references to changing linked data. In: ISWC 2017, Lecture Notes in Computer Science. vol. 10587 (2017). https://doi.org/10.1007/978-3-319-68288-4_26
20. Landhuis, E.: Scientific literature: Information overload. Nature, 457–458 (2016). https://doi.org/10.1038/nj7612-457a
21. Lippi, G., et al.: Scientific publishing in the predatory era. Clinical Chemistry and Laboratory Medicine (CCLM) (5), 683–684 (2018). https://doi.org/10.1515/cclm-2017-1079
22. Mirri, S., et al.: Towards accessible graphs in HTML-based scientific articles. In: 14th IEEE Annual Consumer Communications & Networking Conference (CCNC). pp. 1067–1072 (2017). https://doi.org/10.1109/CCNC.2017.7983287
23. Opthof, T.: Sense and nonsense about the impact factor. Cardiovascular Research, 1–7 (1997). https://doi.org/10.1016/S0008-6363(96)00215-5
24. Peroni, S.: Automating semantic publishing. Data Science, 155–173 (2017). https://doi.org/10.3233/DS-170012
25. Peroni, S., Shotton, D.: The SPAR ontologies. In: Proceedings of the 17th International Semantic Web Conference. vol. 128, pp. 119–136 (2018). https://doi.org/10.1007/978-3-030-00668-6_8
26. Peroni, S., et al.: Research articles in simplified HTML: a web-first format for HTML-based scholarly articles. PeerJ Computer Science (2017). https://doi.org/10.7717/peerj-cs.132
27. Priem, J.: Beyond the paper. Nature, 437–440 (2013). https://doi.org/10.1038/495437a
28. Saha, S., Saint, S., Christakis, D.A.: Impact factor: a valid measure of journal quality? JMLA, 42–46 (2003)
29. Sateli, B., Witte, R.: From papers to triples: An open source workflow for semantic publishing experiments. In: Semantics, Analytics, Visualization. Enhancing Scholarly Data. pp. 39–44.
Springer International Publishing, Cham (2016).https://doi.org/10.1007/978-3-319-53637-8 530. Seglen, P.O.: Why the impact factor of journals should not be used for evaluatingresearch. BMJ , 498–502 (1997). https://doi.org/10.1136/bmj.314.7079.49731. Shotton, D.: Semantic publishing: the coming revolution in scientific journal pub-lishing. Learn. Publ. , 85–94 (2009). https://doi.org/10.1087/200920232. Shotton, D.: The five stars of online journal articles - a framework for article evalu-ation. D-Lib Magazine , 457–458 (2012). https://doi.org/10.1045/january2012-shotton33. Shotton, D., Portwin, K., Klyne, G., Miles, A.: Adventures in semantic publish-ing: Exemplar semantic enhancements of a research article. PLoS computationalbiology (2009). https://doi.org/10.1371/journal.pcbi.100036134. Sikos, L.F.: Knowledge Representation with Semantic Web Standards, pp. 11–49.Springer International Publishing, Cham (2017). https://doi.org/10.1007/978-3-319-54066-5 235. Stern, B.M., OShea, E.K.: A proposal for the future of scientificpublishing in the life sciences. PLoS Biol (2), 683–684 (2019).https://doi.org/10.1371/journal.pbio.3000116
6. Wang, P., Rath, M., Deike, M., Qiang, W.: Open peer review: An in-novation in scientific publishing. In: IConference 2016 Proceedings (2016).https://doi.org/10.9776/163156. Wang, P., Rath, M., Deike, M., Qiang, W.: Open peer review: An in-novation in scientific publishing. In: IConference 2016 Proceedings (2016).https://doi.org/10.9776/16315