Jakub Klímek
Charles University in Prague
Publication
Featured research published by Jakub Klímek.
Information Integration and Web-based Applications & Services | 2013
Josep Maria Brunetti; Sören Auer; Roberto García; Jakub Klímek
Recently, the amount of semantic data available on the Web has increased dramatically. The potential of this vast amount of data is enormous, but in most cases it is difficult for users to explore and use, especially for those without experience with Semantic Web technologies. Applying information visualization techniques to the Semantic Web helps users easily explore large amounts of data and interact with them. In this article we devise a formal Linked Data Visualization Model (LDVM), which allows data to be dynamically connected with visualizations. We report on our implementation of the LDVM, comprising a library of generic visualizations that enable both users and data analysts to get an overview of, visualize, and explore the Data Web and perform detailed analyses of Linked Data.
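The idea of dynamically connecting data with visualizations can be illustrated with a minimal sketch. The stage names and data structures below are hypothetical (the LDVM itself is defined formally in the paper); RDF triples are plain tuples, where a real implementation would use an RDF library and SPARQL.

```python
# Sketch of an LDVM-style pipeline: raw triples are reduced by an
# analyzer stage to statistics, which a later stage maps to a generic
# chart description that any compatible visualizer could render.
from collections import Counter

RDF_TYPE = "rdf:type"

def analytical_abstraction(triples):
    """Analyzer stage: reduce raw triples to class-frequency statistics."""
    return Counter(o for s, p, o in triples if p == RDF_TYPE)

def visualization_abstraction(class_counts):
    """Map statistics to a chart description, most frequent class first."""
    return {"chart": "bar",
            "bars": sorted(class_counts.items(), key=lambda kv: -kv[1])}

triples = [
    ("ex:alice", RDF_TYPE, "foaf:Person"),
    ("ex:bob",   RDF_TYPE, "foaf:Person"),
    ("ex:acme",  RDF_TYPE, "org:Organization"),
]
spec = visualization_abstraction(analytical_abstraction(triples))
print(spec["bars"])  # [('foaf:Person', 2), ('org:Organization', 1)]
```

Because the intermediate chart description is independent of the input vocabulary, the same visualizer can be reused across datasets, which is the kind of decoupling the model aims for.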
Journal of Systems and Software | 2012
Jakub Klímek; Jakub Malý; Irena Mlýnková
XML is a de-facto standard language for data exchange. The structure of XML documents exchanged among different components of a system (e.g. services in a Service-Oriented Architecture) is usually described with XML schemas. It is a common practice that there is not a single XML schema but a whole family of them, each applied in a particular logical execution part of the system. In such systems, the design and later maintenance of the XML schemas is not a simple task. In this paper we address one part of this problem: the evolution of the family of XML schemas. A single change in user requirements or in the surrounding environment of the system may affect several XML schemas in the family. A designer needs to identify the XML schemas affected by a change and ensure that they are evolved coherently with each other to meet the new requirement. Doing this manually is very time-consuming and error-prone. In this paper we show that much of the manual work can be automated. To this end, we introduce a technique based on the principles of Model-Driven Development. A designer is required to make a change only once, in a conceptual schema of the problem domain, and our technique ensures semi-automatic coherent propagation to all affected XML schemas (and vice versa). We provide a formal model of possible evolution changes and their propagation mechanism. We also evaluate the approach on a real-world evolution scenario.
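The make-a-change-once-and-propagate idea can be sketched in a few lines. The data structures here are hypothetical stand-ins (the paper gives a formal model of evolution operations): each schema maps its element names to concepts in a shared conceptual schema, so one rename at the conceptual level reaches every schema in the family.

```python
# Sketch of conceptual-level change propagation: renaming a concept once
# updates every XML schema whose elements reference that concept.
def rename_concept(conceptual, schemas, old, new):
    """Apply one rename in the conceptual schema and propagate it."""
    conceptual.discard(old)
    conceptual.add(new)
    for schema in schemas.values():
        for element, concept in schema.items():
            if concept == old:
                schema[element] = new

conceptual = {"Customer", "Order"}
schemas = {
    "invoice.xsd":  {"buyer": "Customer", "order": "Order"},
    "shipping.xsd": {"recipient": "Customer"},
}
rename_concept(conceptual, schemas, "Customer", "Client")
print(schemas["shipping.xsd"])  # {'recipient': 'Client'}
```

The designer edits only the conceptual schema; the affected schemas are found by following the element-to-concept bindings rather than by manual inspection.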
Data and Knowledge Engineering | 2012
Martin Nečaský; Irena Mlýnková; Jakub Klímek; Jakub Malý
In this paper we introduce a novel approach to conceptual modeling for XML schemas. Compared to other approaches, it allows for modeling a whole family of XML schemas related to a particular application domain. It is integrated into a well-established software-engineering methodology, namely Model-Driven Development (MDD). It allows software engineers to naturally model their application domain using a conceptual schema at the platform-independent level of the MDD hierarchy. From there they can design the desired XML schemas in the form of conceptual schemas at the platform-specific level of the MDD hierarchy. Schemas at the platform-specific level are then automatically translated to particular XML schemas. Besides this forward-engineering direction, a reverse-engineering direction integrating existing XML schemas into the MDD hierarchy is supported as well. We provide several theoretical results which ensure the correctness of the introduced approach. We exploit regular tree grammars to formalize XML schemas. We formalize the bindings between the schemas at the two MDD levels and between schemas at the platform-specific level and XML schemas. We prove that conceptual schemas specify the target XML schemas unambiguously. We also establish the expressive power of the conceptual schemas. Finally, we prove the correctness of the introduced translation algorithms between the platform-specific and XML schema levels.
Extended Semantic Web Conference | 2013
Jakub Klímek; Jiří Helmich
Payola is a framework for Linked Data analysis and visualization. The goal of the project is to provide end users with a tool enabling them to analyze Linked Data in a user-friendly way and without knowledge of the SPARQL query language. This goal is achieved by populating the framework with a variety of domain-specific analysis and visualization plugins. The plugins, as well as the created analyses, can be shared and reused among users. The analyses can be executed using the tool, and the results can be visualized using a variety of visualization plugins. The visualizations can be further customized according to the ontologies used in the resulting data. The framework is highly extensible and uses modern technologies such as HTML5 and Scala. In this paper we show two use cases, one general and one from the domain of public procurement.
International Semantic Web Conference | 2016
Jakub Klímek; Petr Škoda
As Linked Data gains traction, proper support for its publication and consumption is more important than ever. Even though there is a multitude of tools for the preparation of Linked Data, they are still either quite limited, difficult to use, or not compliant with recent W3C Recommendations. In this demonstration paper, we present LinkedPipes ETL, a lightweight Linked Data preparation tool. It is focused mainly on a smooth user experience, including on mobile devices; ease of integration, based on full API coverage; and universal usage, thanks to its library of components. We build on the experience gained by developing and using UnifiedViews, our previous Linked Data ETL tool, and present four use cases in which our new tool excels in comparison.
European Semantic Web Conference | 2014
Jiří Helmich; Jakub Klímek
The data cube represents one of the basic means of storing, processing, and analyzing statistical data. Recently, the RDF Data Cube Vocabulary became a W3C Recommendation, and at the same time interesting datasets using it started to appear, along with the need for compatible visualization tools. The Linked Data Visualization Model (LDVM) is a formalism focused on this area and is implemented by Payola, a framework for analysis and visualization of Linked Data. In this paper, we present the capabilities of the LDVM and Payola for visualizing RDF Data Cubes as well as other statistical datasets not yet compatible with the Data Cube Vocabulary. We also compare our approach to CubeViz, a visualization tool specialized in RDF Data Cube visualizations.
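To give a feel for what such a visualizer consumes, here is a minimal sketch of reading RDF Data Cube observations for charting. The `qb:Observation` class follows the W3C vocabulary, but the dimension and measure IRIs (`ex:year`, `ex:amount`) are made up for the example, and triples are plain tuples rather than a real RDF graph.

```python
# Sketch: collect (dimension, measure) pairs from qb:Observation
# resources, yielding a series a chart component could plot directly.
QB_OBS = "qb:Observation"

triples = [
    ("ex:o1", "rdf:type", QB_OBS), ("ex:o1", "ex:year", "2011"), ("ex:o1", "ex:amount", 10),
    ("ex:o2", "rdf:type", QB_OBS), ("ex:o2", "ex:year", "2012"), ("ex:o2", "ex:amount", 14),
]

def series(triples, dimension, measure):
    """Collect (dimension value, measure value) pairs per observation."""
    observations = {s for s, p, o in triples if p == "rdf:type" and o == QB_OBS}
    props = {(s, p): o for s, p, o in triples}
    return sorted((props[(s, dimension)], props[(s, measure)]) for s in observations)

print(series(triples, "ex:year", "ex:amount"))  # [('2011', 10), ('2012', 14)]
```

Datasets not yet using the vocabulary would need a prior mapping of their ad-hoc properties onto such dimension/measure roles, which is where a model like the LDVM earns its keep.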
Advanced Information Networking and Applications | 2011
Jakub Klímek
With the introduction of the SAWSDL W3C Recommendation, the possibility of enriching web service interfaces with semantic model references surfaced as a foundation for semantic web services. However, the recommendation says neither what the semantic model should be nor what to do with the actual XML data. In this paper, we exploit our conceptual model for XML data to generate SAWSDL-enriched XML schemas, but mainly to automatically generate the so-called lifting and lowering schema mappings in the form of XSLT scripts. These scripts can be used to transform the XML data produced by the web service into RDF data (lifting) and vice versa (lowering). Once in RDF, the data can be manipulated using the knowledge given by a corresponding ontology mapped to our model, and the reasoning power granted by the ontology description can be exploited.
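What a lifting transformation does can be shown with a small Python stand-in for the generated XSLT (the paper produces actual XSLT scripts; the element-to-property mapping below is a hypothetical example, not the one derived from the conceptual model).

```python
# Sketch of "lifting": turn XML element content into RDF triples
# according to a mapping from element names to ontology properties.
# Lowering would be the inverse: serializing triples back into the
# XML structure the web service expects.
import xml.etree.ElementTree as ET

MAPPING = {"name": "foaf:name", "email": "foaf:mbox"}  # assumed mapping

def lift(xml_text, subject):
    """Produce (subject, property, value) triples from child elements."""
    root = ET.fromstring(xml_text)
    return [(subject, MAPPING[child.tag], child.text)
            for child in root if child.tag in MAPPING]

triples = lift("<person><name>Alice</name><email>a@ex.org</email></person>",
               "ex:alice")
print(triples)
```

The point of generating such mappings from a single conceptual model is that lifting and lowering stay consistent with each other and with the schema, instead of being three hand-maintained artifacts.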
EDBT/ICDT Workshops | 2010
Jakub Klímek
One of the key problems of most applications is their evolution over time. The data structures and the operations on data change. In a complex environment, the problem is more difficult because the applications use many data sources. Therefore it is necessary to address the problem at all relevant levels of data design and processing. There are many sources of XML data (e.g. web services), and XML is also used for the integration of heterogeneous systems. A problem arises when there is a request to change the data representation, which affects many documents and the schemas that describe them. The goal of this doctoral work is to design and implement algorithms for creating a common conceptual model from an existing set of XML schemas and for adding new schemas to an existing model. When this is done, the whole system can be evolved easily from one place, with automatic propagation of changes.
Computers in Industry | 2014
Jakub Klímek; Jindřich Mynarz; Tomáš Knap; Vojtěch Svátek; Jakub Stárka
Management of the tendering phase of the public contract lifecycle is a demanding activity with an often irrevocable impact on the subsequent realization phase. We investigate the impact of linked data technology on this process. The public contract information itself can be published as linked data. A specialized vocabulary, the Public Contracts Ontology, was designed for this purpose. Extractors and transformers for public contract datasets in various formats (HTML, CSV, XML) were developed to enable conversion into RDF corresponding to the vocabulary. Moreover, an application for filing public contracts was implemented. It enables a contracting authority to manage RDF data about itself and its contracts, suppliers to the contracts, to-be-contracted products and services, and actual tenders proposed by bidders. It also provides matchmaking services for finding similar contracts and suitable suppliers for a given call for tenders based on their history, which is a useful feature for contracting authorities.
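The CSV-to-RDF extraction step can be sketched as follows. The `pc:` property names and the column layout here are placeholders (the actual Public Contracts Ontology defines its own terms), and a real extractor would emit RDF through a library such as rdflib rather than tuples.

```python
# Sketch of a CSV-to-RDF transformer for public contract records:
# each row becomes one contract resource with a type, title, and price.
import csv
import io

def contracts_to_triples(csv_text):
    triples = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        contract = f"ex:contract/{row['id']}"
        triples.append((contract, "rdf:type", "pc:Contract"))
        triples.append((contract, "dcterms:title", row["title"]))
        triples.append((contract, "pc:estimatedPrice", row["price"]))
    return triples

data = "id,title,price\n1,Road repair,100000\n2,IT services,50000\n"
triples = contracts_to_triples(data)
print(len(triples))  # 3 triples per contract row -> 6
```

Once every source format is reduced to triples over one shared vocabulary, services such as similar-contract matchmaking can query all of them uniformly.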
International Conference on Web Services | 2010
Jakub Klímek
Modern information systems may exploit numerous web services for communication. Each web service may exploit its own XML format for data representation, which causes problems with integration and evolution. Manually integrating the XML formats and managing their evolution may be very hard. In this paper, we present a novel method which exploits a conceptual diagram. We introduce an algorithm which helps a domain expert map the XML formats to the conceptual diagram. It measures similarities between the XML formats and the diagram and adjusts them based on input from the expert. The result is a precise mapping. The diagram then integrates the XML formats and facilitates their evolution: a change can be made only once in the diagram and propagated to the XML formats.
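The similarity-measuring step can be sketched with a simple string measure. The standard-library `SequenceMatcher` stands in for the paper's similarity measures, and the element and class names are made up; the real algorithm additionally adjusts candidate mappings from the expert's feedback.

```python
# Sketch: propose, for each XML element name, the most similar class
# name from the conceptual diagram, as a starting point for the expert.
from difflib import SequenceMatcher

def best_matches(xml_elements, diagram_classes):
    """Map each element to the diagram class with the highest similarity."""
    return {e: max(diagram_classes,
                   key=lambda c: SequenceMatcher(None, e.lower(), c.lower()).ratio())
            for e in xml_elements}

mapping = best_matches(["custName", "orderDate"],
                       ["CustomerName", "OrderDate", "Price"])
print(mapping)  # {'custName': 'CustomerName', 'orderDate': 'OrderDate'}
```

Automating the first guess this way leaves the expert to confirm or correct a short list of candidates instead of mapping every element from scratch.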