Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Volha Bryl is active.

Publication


Featured research published by Volha Bryl.


Requirements Engineering | 2009

Designing socio-technical systems: from stakeholder goals to social networks

Volha Bryl; Paolo Giorgini; John Mylopoulos

Software systems are becoming an integral part of everyday life, influencing organizational and social activities. This heightens the need for a socio-technical perspective on requirements engineering, one that allows for modelling and analyzing the composition and interaction of hardware and software components with human and organizational actors. In this setting, alternative requirements models have to be evaluated and selected, finding the right trade-off between the technical and social dimensions. To address this problem, we propose a tool-supported process of requirements analysis for socio-technical systems, which adopts planning techniques for exploring the space of requirements alternatives and a number of social criteria for their evaluation. We illustrate the proposed approach with the help of a case study conducted within the context of an EU project.
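The exploration step can be pictured as a small search over assignments of goals to actors, scored by a social criterion. The sketch below is illustrative only, not the paper's planner-based tool; the goals, actors, and cost function are hypothetical stand-ins for the social criteria used for evaluation.

```python
from itertools import product

# Hypothetical inputs: which actor can satisfy which goal, and at what
# "social cost" (a stand-in for the paper's social evaluation criteria).
capabilities = {
    "collect_data":    {"clerk": 2, "web_portal": 1},
    "verify_identity": {"clerk": 1, "security_office": 3},
    "issue_permit":    {"clerk": 3, "back_office": 2},
}

def alternatives():
    """Enumerate every total assignment of goals to capable actors."""
    goals = list(capabilities)
    for actors in product(*(capabilities[g] for g in goals)):
        yield dict(zip(goals, actors))

def social_cost(assignment):
    """Toy criterion: total cost plus a penalty for overloading one actor."""
    total = sum(capabilities[g][a] for g, a in assignment.items())
    actors = list(assignment.values())
    overload = max(actors.count(a) for a in actors) - 1
    return total + overload

best = min(alternatives(), key=social_cost)
print(best, "cost:", social_cost(best))
```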


On the Move to Meaningful Internet Systems | 2006

Designing cooperative IS: exploring and evaluating alternatives

Volha Bryl; Paolo Giorgini; John Mylopoulos

At the early stages of cooperative information system development, one of the major problems is to explore the space of alternative ways of assigning and delegating goals among system actors. The exploration process should be guided by a number of criteria to determine whether the adopted alternative is good enough. This paper frames the problem of designing actor dependency networks as a multi-agent planning problem and adopts an off-the-shelf planner to offer a tool (P-Tool) that generates alternative actor dependency networks and evaluates them in terms of metrics derived from the Game Theory literature. We also offer preliminary experimental results on the scalability of the approach.
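One family of Game-Theory-derived metrics of the kind mentioned above can be illustrated with a toy fairness measure over a network of delegations. The network and the metric below are invented for illustration and are not P-Tool's actual metrics.

```python
# A delegation network as (delegator, delegatee, goal) triples -- hypothetical data.
delegations = [
    ("citizen", "municipality", "issue_certificate"),
    ("municipality", "registry", "check_records"),
    ("municipality", "registry", "archive_request"),
]

def workloads(network):
    """Count how many goals each delegatee ends up responsible for."""
    load = {}
    for _, delegatee, _ in network:
        load[delegatee] = load.get(delegatee, 0) + 1
    return load

def fairness(network):
    """Toy egalitarian criterion: a smaller max/min workload ratio is fairer."""
    load = workloads(network)
    return max(load.values()) / min(load.values())

print(workloads(delegations), "fairness ratio:", fairness(delegations))
```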


International Semantic Web Conference | 2014

Detecting Errors in Numerical Linked Data Using Cross-Checked Outlier Detection

Daniel Fleischhacker; Heiko Paulheim; Volha Bryl; Johanna Völker

Outlier detection used for identifying wrong values in data is typically applied to single datasets to search them for values of unexpected behavior. In this work, we instead propose an approach which combines the outcomes of two independent outlier detection runs to get a more reliable result and to also prevent problems arising from natural outliers which are exceptional values in the dataset but nevertheless correct. Linked Data is especially suited for the application of such an idea, since it provides large amounts of data enriched with hierarchical information and also contains explicit links between instances. In a first step, we apply outlier detection methods to the property values extracted from a single repository, using a novel approach for splitting the data into relevant subsets. For the second step, we exploit owl:sameAs links for the instances to get additional property values and perform a second outlier detection on these values. Doing so allows us to confirm or reject the assessment of a wrong value. Experiments on the DBpedia and NELL datasets demonstrate the feasibility of our approach.
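The two-step cross-check can be mimicked in a few lines. The sketch below uses a simple interquartile-range test in place of the paper's outlier detection methods, and a hand-made dict in place of values gathered via owl:sameAs links; both are assumptions for illustration.

```python
def iqr_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] (simple IQR test)."""
    xs = sorted(values)
    q1, q3 = xs[len(xs) // 4], xs[(3 * len(xs)) // 4]
    iqr = q3 - q1
    return {v for v in values if v < q1 - k * iqr or v > q3 + k * iqr}

# Step 1: values of one property (population) from a single repository.
population = {"A": 81_000, "B": 75_000, "C": 8_100_000, "D": 79_000, "E": 82_000}
suspects = {e for e in population if population[e] in iqr_outliers(population.values())}

# Step 2: values for the same entity from owl:sameAs-linked sources (hypothetical).
linked_values = {"C": [8_050_000, 8_120_000, 8_090_000]}

# Confirm an error only if the repository value is also an outlier
# among the values reported by the linked sources.
for entity in suspects:
    combined = linked_values.get(entity, []) + [population[entity]]
    confirmed = population[entity] in iqr_outliers(combined)
    print(entity, "wrong value" if confirmed else "natural outlier")
```

On this toy data, entity C is flagged in step 1 but its value agrees with the linked sources in step 2, so it is classified as a natural outlier rather than an error.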


Archive | 2014

Linked Open Data -- Creating Knowledge Out of Interlinked Data

Sören Auer; Volha Bryl; Sebastian Tramp

In this introductory chapter we give a brief overview of the Linked Data concept, the Linked Data life cycle, and the LOD2 Stack – an integrated distribution of aligned tools which support the whole life cycle of Linked Data, from extraction and authoring/creation via enrichment, interlinking, and fusing to maintenance. The stack is designed to be versatile; for all functionality we define clear interfaces, which enable the plugging in of alternative third-party implementations. The architecture of the LOD2 Stack is based on three pillars: (1) software integration and deployment using the Debian packaging system; (2) use of a central SPARQL endpoint and standardized vocabularies for knowledge base access and integration between the different tools of the LOD2 Stack; and (3) integration of the LOD2 Stack user interfaces based on REST-enabled web applications. These three pillars comprise the methodological and technological framework for integrating the very heterogeneous LOD2 Stack components into a consistent whole.

The Semantic Web activity has gained momentum with the widespread publishing of structured data as RDF. The Linked Data paradigm has therefore evolved from a practical research idea into a very promising candidate for addressing one of the biggest challenges in the area of intelligent information management: the exploitation of the Web as a platform for data and information integration as well as for search and querying. Just as we publish unstructured textual information on the Web as HTML pages and search such information by using keyword-based search engines, we are already able to easily publish structured information, reliably interlink this information with other data published on the Web, and search the resulting data space by using more expressive querying beyond simple keyword searches.

The Linked Data paradigm has evolved into a powerful enabler for the transition of the current document-oriented Web into a Web of interlinked data and, ultimately, into the Semantic Web. The term Linked Data here refers to a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the past three years, leading to the creation of a global data space that contains many billions of assertions – the Web of Linked Data (cf. Fig. 1). In this context, LOD2 targets a number of research challenges: improving the coherence and quality of data published on the Web, closing the performance gap between relational and RDF data management, and establishing trust on the Linked Data Web.
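Pillar (2), access to the knowledge base through a central SPARQL endpoint, is straightforward to demonstrate. The sketch below queries the public DBpedia endpoint with plain HTTP; the endpoint URL and the query are illustrative examples and are not part of the LOD2 Stack itself.

```python
import requests

# Any SPARQL 1.1 endpoint accepts a query via HTTP; DBpedia's public
# endpoint is used here purely as an example.
ENDPOINT = "https://dbpedia.org/sparql"

query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>
SELECT ?city ?population WHERE {
  ?city a dbo:City ;
        dbo:country dbr:Germany ;
        dbo:populationTotal ?population .
}
ORDER BY DESC(?population)
LIMIT 5
"""

response = requests.get(
    ENDPOINT,
    params={"query": query, "format": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()

# Standard SPARQL JSON results: bindings live under results/bindings.
for binding in response.json()["results"]["bindings"]:
    print(binding["city"]["value"], binding["population"]["value"])
```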


AOSE'06 Proceedings of the 7th International Conference on Agent-Oriented Software Engineering VII | 2006

Using risk analysis to evaluate design alternatives

Yudistira Asnar; Volha Bryl; Paolo Giorgini

Recently, multi-agent systems have proved to be a suitable approach to the development of real-life information systems. In particular, they are used in the domain of safety critical systems where availability and reliability are crucial. For these systems, the ability to mitigate risk (e.g., failures, exceptional events) is very important. In this paper, we propose to incorporate risk concerns into the process of a multi-agent system design and describe the process of exploring and evaluating design alternatives based on risk-related metrics. We illustrate the proposed approach using an Air Traffic Management case study.
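The idea of risk-related metrics can be illustrated with the classical expected-loss formulation, risk = likelihood × impact, summed over the events an alternative is exposed to. The alternatives and numbers below are hypothetical, and the paper's actual risk model is considerably richer.

```python
# Hypothetical risk exposure of two design alternatives: each event is
# a (likelihood, impact) pair, e.g. for an Air Traffic Management scenario.
alternatives = {
    "centralized_control": [(0.01, 1000.0), (0.10, 50.0)],
    "redundant_agents":    [(0.01, 200.0), (0.15, 50.0)],
}

def expected_loss(events):
    """Classical risk metric: sum of likelihood * impact over all events."""
    return sum(p * cost for p, cost in events)

for name, events in alternatives.items():
    print(name, expected_loss(events))

best = min(alternatives, key=lambda a: expected_loss(alternatives[a]))
print("preferred alternative:", best)
```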


AOIS'06 Proceedings of the 8th International Bi-Conference on Agent-Oriented Information Systems IV | 2006

ToothAgent: a multi-agent system for virtual communities support

Volha Bryl; Paolo Giorgini; Stefano Fante

People tend to form social networks within geographical areas. This can be explained by the fact that geographical localities generally correspond to common interests (e.g., students located at a university could be interested in buying or selling the textbooks adopted for a specific course, sharing notes, or just meeting to play basketball). Cellular phones and, more generally, mobile devices are currently widely used and represent a big opportunity to support social communities. In this paper, we present a general architecture for multi-agent systems accessible via mobile devices (cellular phones and PDAs), where Bluetooth technology has been adopted to reflect users' locality. We illustrate ToothAgent, an implemented prototype of the proposed architecture, and discuss the opportunities offered by the system.
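The locality mechanism rests on ordinary Bluetooth device discovery. A minimal sketch using the PyBluez library is shown below; it demonstrates only the discovery step, not ToothAgent's agent platform, and assumes PyBluez is installed and a Bluetooth adapter is present.

```python
import bluetooth  # PyBluez; pip install pybluez

# Devices visible to the same Bluetooth server imply shared physical
# locality (e.g., students on the same campus) in ToothAgent's setting.
nearby = bluetooth.discover_devices(duration=8, lookup_names=True)

for address, name in nearby:
    print(f"found {name} at {address}")
```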


International World Wide Web Conference | 2014

Learning conflict resolution strategies for cross-language Wikipedia data fusion

Volha Bryl

In order to efficiently use the ever-growing amounts of structured data on the web, methods and tools for quality-aware data integration should be devised. In this paper we propose an approach to automatically learning conflict resolution strategies, a crucial step in large-scale data integration. The approach is implemented as an extension of the Sieve data quality assessment and fusion framework. We apply and evaluate our approach on the use case of fusing data from 10 language editions of DBpedia, a large-scale structured knowledge base extracted from Wikipedia. We also propose a method for extracting rich provenance metadata for each DBpedia fact, which is later used in data fusion.
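The core idea, picking a conflict resolution function per property by checking which one best reproduces known correct values, can be sketched as follows. The strategies and the tiny gold standard are hypothetical stand-ins; the actual system extends the Sieve framework.

```python
from collections import Counter
from statistics import median

# Candidate conflict resolution strategies (a small subset of what
# fusion frameworks like Sieve offer).
strategies = {
    "vote":   lambda vals: Counter(vals).most_common(1)[0][0],
    "max":    lambda vals: max(vals),
    "median": lambda vals: median(vals),
}

# Hypothetical training data: per property, conflicting values from
# several language editions plus the known correct (gold) value.
training = {
    "populationTotal": [([81_000, 81_000, 79_500], 81_000),
                        ([120_000, 118_000, 118_000], 118_000)],
    "elevation":       [([520, 515, 999], 520),
                        ([88, 88, 90], 88)],
}

def learn_strategy(examples):
    """Pick the strategy that reproduces the gold value most often."""
    def accuracy(strategy):
        return sum(strategy(vals) == gold for vals, gold in examples)
    return max(strategies, key=lambda name: accuracy(strategies[name]))

for prop, examples in training.items():
    print(prop, "->", learn_strategy(examples))
```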


International Semantic Web Conference | 2010

Supporting natural language processing with background knowledge: coreference resolution case

Volha Bryl; Claudio Giuliano; Luciano Serafini; Kateryna Tymoshenko

Systems based on statistical and machine learning methods have been shown to be extremely effective and scalable for the analysis of large amounts of textual data. However, in recent years it has become evident that one of the most important directions of improvement in natural language processing (NLP) tasks like word sense disambiguation, coreference resolution, relation extraction, and other tasks related to knowledge extraction is the exploitation of semantics. While in the past the unavailability of rich and complete semantic descriptions constituted a serious limitation of their applicability, nowadays the Semantic Web has made available a large amount of logically encoded information (e.g., ontologies, RDF(S) data, linked data), which constitutes a valuable source of semantics. However, web semantics cannot be easily plugged into machine learning systems. The objective of this paper is therefore to define a reference methodology for combining semantic information available on the web in the form of logical theories with statistical methods for NLP. The major problems that we have to solve to implement our methodology concern (i) the selection of the correct and minimal knowledge among the large amount available on the web, (ii) the representation of uncertain knowledge, and (iii) the resolution and encoding of the rules that combine knowledge retrieved from Semantic Web sources with semantics in the text. In order to evaluate the appropriateness of our approach, we present an application of the methodology to the problem of intra-document coreference resolution, and we show by means of experiments on a standard dataset how the injection of knowledge improves task performance.
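The knowledge-injection step can be made concrete with a toy mention-pair classifier in which one feature encodes type compatibility looked up in a background knowledge base. Everything here, the features, the mini knowledge base, and the training pairs, is invented for illustration; the paper defines a full methodology for selecting and encoding such knowledge.

```python
from sklearn.linear_model import LogisticRegression

# Toy background knowledge: entity types, as one might obtain from RDF data.
kb_types = {"Rome": "City", "Italy": "Country", "capital": "City"}

def features(mention_a, mention_b):
    """Surface features plus one semantic feature from the knowledge base."""
    same_head = float(mention_a.split()[-1] == mention_b.split()[-1])
    type_a = kb_types.get(mention_a.split()[-1])
    type_b = kb_types.get(mention_b.split()[-1])
    compatible = float(type_a is not None and type_a == type_b)
    return [same_head, compatible]

# Hypothetical mention pairs labelled coreferent (1) or not (0).
pairs = [("Rome", "the capital", 1), ("Rome", "Italy", 0),
         ("the capital", "Rome", 1), ("Italy", "the capital", 0)]

X = [features(a, b) for a, b, _ in pairs]
y = [label for _, _, label in pairs]

clf = LogisticRegression().fit(X, y)
print(clf.predict([features("Rome", "the capital")]))
```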


Electronic Government, An International Journal | 2009

Evaluating procedural alternatives: a case study in e-voting

Volha Bryl; Fabiano Dalpiaz; Roberta Ferrario; Andrea Mattioli

This paper describes part of the work within the ProVotE project, whose goal is the introduction of e-voting systems for local elections. The approach is aimed at providing both precise models of the electoral processes and mechanisms for documenting and reasoning on the possible alternative implementations of the procedures. It is based on defining an alternating sequence of models written using UML and Tropos. UML is used to represent electoral processes (both existing and future), while Tropos provides a means to reason about and document the decisions taken on how to change the existing procedures to support an electronic election.


International World Wide Web Conference | 2014

Integrating product data from websites offering microdata markup

Petar Petrovski; Volha Bryl

Large numbers of websites have started to mark up their content using standards such as Microdata, Microformats, and RDFa. The marked-up content elements comprise descriptions of people, organizations, places, events, products, ratings, and reviews. This development has accelerated in recent years, as major search engines such as Google, Bing, and Yahoo! use the markup to improve their search results. Embedded semantic markup facilitates identifying content elements on webpages. However, the markup is mostly not as fine-grained as desirable for applications that aim to integrate data from large numbers of websites. This paper discusses the challenges that arise in the task of integrating descriptions of electronic products from several thousand e-shops that offer Microdata markup. We present a solution for each step of the data integration process, including Microdata extraction, product classification, product feature extraction, identity resolution, and data fusion. We evaluate our processing pipeline using 1.9 million product offers from 9,240 e-shops, which we extracted from the Common Crawl 2012, a large public Web corpus.
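Of the pipeline steps listed above, identity resolution is the easiest to sketch: a token-based similarity over product titles with a match threshold. The titles and the threshold are made-up examples; the paper evaluates considerably more elaborate matching.

```python
def jaccard(title_a, title_b):
    """Token-set Jaccard similarity between two product titles."""
    a, b = set(title_a.lower().split()), set(title_b.lower().split())
    return len(a & b) / len(a | b)

# Hypothetical offers from two e-shops, extracted from Microdata markup.
shop_a = ["Acme X200 Laptop 15.6 inch", "Acme PowerCam 300"]
shop_b = ["acme x200 15.6 laptop", "PowerShot G7 camera"]

THRESHOLD = 0.5  # assumed cut-off; in practice tuned on labelled pairs
for offer_a in shop_a:
    for offer_b in shop_b:
        if jaccard(offer_a, offer_b) >= THRESHOLD:
            print("match:", offer_a, "<->", offer_b)
```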

Collaboration


Dive into Volha Bryl's collaborations.

Top Co-Authors

Nicola Zannone

Eindhoven University of Technology

Marco Montali

Free University of Bozen-Bolzano
