
Publication


Featured research published by Antoon Bronselaer.


Nucleic Acids Research | 2013

Quorumpeps database: chemical space, microbial origin and functionality of quorum sensing peptides

Evelien Wynendaele; Antoon Bronselaer; Joachim Nielandt; Matthias D’Hondt; Sofie Stalmans; Nathalie Bracke; Frederick Verbeke; Christophe Van de Wiele; Guy De Tré; Bart De Spiegeleer

Quorum-sensing (QS) peptides are biologically attractive molecules, with a wide diversity of structures, and are amenable to modifications that alter existing functionalities or introduce new ones. Therefore, the Quorumpeps database (http://quorumpeps.ugent.be) was developed to give a structured overview of the QS oligopeptides, describing their microbial origin (species), functionality (method, result and receptor), peptide links and chemical characteristics (3D-structure-derived physicochemical properties). The chemical diversity observed within this group of QS signalling molecules can be used to develop new synthetic bio-active compounds.


Brain Structure & Function | 2012

Brainpeps: the blood–brain barrier peptide database

Sylvia Van Dorpe; Antoon Bronselaer; Joachim Nielandt; Sofie Stalmans; Evelien Wynendaele; Kurt Audenaert; Christophe Van de Wiele; Christian Burvenich; Kathelijne Peremans; Hung Hsuchou; Guy De Tré; Bart De Spiegeleer

Peptides are able to cross the blood–brain barrier (BBB) through various mechanisms, opening new diagnostic and therapeutic avenues. However, their BBB transport data are scattered in the literature over different disciplines, using different methodologies and reporting different influx or efflux aspects. Therefore, a comprehensive BBB peptide database (Brainpeps) was constructed to collect the BBB data available in the literature. Brainpeps currently contains BBB transport information with positive as well as negative results. The database is a useful tool to prioritize peptide choices for evaluating different BBB responses or studying quantitative structure–property (BBB behaviour) relationships of peptides. Because a multitude of methods have been used to assess the BBB behaviour of compounds, we classified these methods and their responses. Moreover, the relationships between the different BBB transport methods have been clarified and visualized.


Advances in Nutrition | 2017

Perspective: Essential Study Quality Descriptors for Data from Nutritional Epidemiologic Research

Chen Yang; Mariona Pinart; Patrick Kolsteren; John Van Camp; Nathalie De Cock; Katharina Nimptsch; Tobias Pischon; Eamon Laird; Giuditta Perozzi; Raffaella Canali; Axelle Hoge; Marta Stelmach-Mardas; Lars O. Dragsted; Stéphanie Maria Palombi; Irina Dobre; Jildau Bouwman; Peter Clarys; Fabio Minervini; Maria De Angelis; Marco Gobbetti; Jean Tafforeau; Oscar Coltell; Dolores Corella; Hendrik De Ruyck; Janette Walton; Laura Kehoe; Christophe Matthys; Bernard De Baets; Guy De Tré; Antoon Bronselaer

Pooled analysis of secondary data increases the power of research and enables scientific discovery in nutritional epidemiology. Information on study characteristics that determine data quality is needed to enable correct reuse and interpretation of data. This study aims to define essential quality characteristics for data from observational studies in nutrition. First, a literature review was performed to gain insight into existing instruments that assess the quality of cohort, case-control, and cross-sectional studies and dietary measurement. Second, 2 face-to-face workshops were organized to determine the study characteristics that affect data quality. Third, consensus on the data descriptors and controlled vocabulary was obtained. From 4884 papers retrieved, 26 relevant instruments, containing 164 characteristics for study design and 93 characteristics for measurements, were selected. The workshop and consensus process resulted in 10 descriptors allocated to the “study design” domain and 22 to the “measurement” domain. Data descriptors were organized as an ordinal scale of items to facilitate the identification, storage, and querying of nutrition data. Further integration of an Ontology for Nutrition Studies will facilitate interoperability of data repositories.


Fuzzy Sets and Systems | 2017

Evaluating flexible criteria on uncertain data

Robin De Mol; Antoon Bronselaer; Guy De Tré

Modern information management systems and databases are rapidly becoming better equipped for handling data imperfections. A common imperfection is uncertainty, indicating that a property's exact value is not known. Ideally, such systems can be queried uniformly using flexible criteria regardless of whether the underlying data are uncertain or not. The result thereof should always be informative and intuitive, and should reflect to what degree the data satisfy the criteria and to what degree this is uncertain. In this work, we present a novel way to evaluate flexible criteria on uncertain data. The result thereof is a distribution of uncertainty over degrees of satisfaction. These so-called suitability distributions are first constructed for possibilistic data. It is shown that they can be used in all scenarios, ranging from regular, crisp criteria on certain data to flexible criteria on uncertain data, and that they seamlessly generalize other alternatives. Importantly, their interpretation is always the same, so they can be used without prior knowledge regarding the quality of the data. Afterwards, their properties and supported operations are given. Next, it is shown that they can also be applied more broadly, for example to probabilistic data. Examples illustrate their rich semantics, ease of use and broad applicability.
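The core construction described above can be sketched in a few lines: a possibility distribution over candidate attribute values is pushed through a flexible criterion (via the extension principle) to obtain a possibility distribution over satisfaction degrees. The function names and the example "cheap" criterion are illustrative assumptions, not the paper's exact notation.

```python
# Hedged sketch of a "suitability distribution": the possibility that a
# flexible criterion is satisfied to a given degree, derived from a
# possibilistic attribute value via the extension principle.

def suitability_distribution(possibilistic_value, criterion):
    """Map a possibility distribution over attribute values to a
    possibility distribution over satisfaction degrees."""
    dist = {}
    for value, poss in possibilistic_value.items():
        s = criterion(value)
        dist[s] = max(dist.get(s, 0.0), poss)  # extension principle
    return dist

# Flexible criterion "price is cheap": fully satisfied below 100,
# linearly decreasing, unsatisfied above 200.
def cheap(price):
    if price <= 100:
        return 1.0
    if price >= 200:
        return 0.0
    return (200 - price) / 100

# Uncertain price: possibly 90 (fully possible) or 150 (possibility 0.6).
uncertain_price = {90: 1.0, 150: 0.6}
print(suitability_distribution(uncertain_price, cheap))
# → {1.0: 1.0, 0.5: 0.6}
```

A crisp criterion on certain data reduces to a distribution with a single fully possible satisfaction degree, which matches the claim that the interpretation stays the same across all scenarios.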


International Conference on Information Processing | 2018

Randomness of Data Quality Artifacts

Toon Boeckling; Antoon Bronselaer; Guy De Tré

Quality of data is often measured by counting artifacts. While this procedure is very simple and applicable to many different types of artifacts, such as errors, inconsistencies and missing values, counts do not differentiate between different distributions of data artifacts. A possible solution is to add a randomness measure indicating how randomly data artifacts are distributed. It has been proposed to calculate randomness by means of the Lempel-Ziv complexity algorithm, but this approach comes with some drawbacks. Most importantly, the Lempel-Ziv approach assumes that there is some implicit order among data objects, and the measured randomness depends on this order. To overcome this problem, a new method is proposed that measures randomness proportionate to the average number of bits needed to compress, using unary coding, the bit matrix matching the artifacts in a database relation. It is shown that this method has several interesting properties that align the proposed measurement procedure with the intuitive perception of randomness.
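The paper's exact unary-coding compression scheme is not reproduced here; as a simplified proxy that carries the same intuition, the sketch below counts runs in an artifact bit string. Two datasets with the same artifact count can differ sharply in how scattered those artifacts are, which is exactly the information a plain count throws away.

```python
# Illustrative sketch only (not the paper's method): compare two artifact
# bit strings with equal artifact counts but different distributions.
# Scattered artifacts produce more runs than clustered ones, so a
# run-based proxy distinguishes cases that a plain count cannot.

from itertools import groupby

def run_count(bits):
    """Number of maximal runs of equal bits."""
    return sum(1 for _ in groupby(bits))

clustered = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]  # 4 artifacts, together
scattered = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]  # 4 artifacts, spread out

print(sum(clustered), run_count(clustered))  # 4 artifacts, 2 runs
print(sum(scattered), run_count(scattered))  # 4 artifacts, 8 runs
```

Note that, unlike the paper's proposal, a run count still depends on the order of the data objects; the unary-coding approach over the bit matrix is precisely what removes that dependence.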


International Conference on Information Processing | 2018

Operational measurement of data quality

Antoon Bronselaer; Joachim Nielandt; Toon Boeckling; Guy De Tré

In this paper, an alternative view on measurement of data quality is proposed. Current procedures for data quality measurement provide information about the extent to which data misrepresent reality. These procedures are descriptive in the sense that they provide us with numerical information about the state of data. In many cases, this information is not sufficient to know whether data are fit for the task they were meant for. To bridge that gap, we propose a procedure that measures the operational characteristics of data. In this paper, we devise such a procedure by measuring the cost it takes to make data fit for use. We lay out the basics of this procedure and then provide more details on two essential components: tasks and transformation functions.
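The operational view above can be sketched concretely: a task defines which checks a record must pass, and each transformation that repairs a failing check carries a cost; the quality of a record is then expressed as the total repair cost. The specific checks and cost values below are invented for illustration, not taken from the paper.

```python
# Hedged sketch of cost-based, operational data quality: a record's
# quality for a task is the cost of the transformations needed to make
# it fit for use. Checks, fixes, and costs here are illustrative only.

def repair_cost(record, checks, fixes):
    """Sum the cost of every fix whose check fails on the record."""
    cost = 0
    for name, check in checks.items():
        if not check(record):
            cost += fixes[name]  # cost of the transformation for this defect
    return cost

# Task: records need a non-empty name and a plausible age.
checks = {
    "name_present": lambda r: bool(r.get("name")),
    "age_valid": lambda r: isinstance(r.get("age"), int) and 0 <= r["age"] <= 120,
}
# Assumed per-defect repair costs (e.g. minutes of manual work).
fixes = {"name_present": 5, "age_valid": 2}

print(repair_cost({"name": "Ada", "age": 36}, checks, fixes))  # 0: fit for use
print(repair_cost({"name": "", "age": 999}, checks, fixes))    # 7: needs both fixes
```

A cost of zero means the data are already fit for the task; the same records could score very differently under another task with different checks, which is the point of measuring operationally rather than descriptively.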


International Journal of Approximate Reasoning | 2018

An incremental approach for data quality measurement with insufficient information

Antoon Bronselaer; Joachim Nielandt; G. De Tré

Recently, a fundamental study on measurement of data quality introduced an ordinal-scaled procedure of measurement. Besides the pure ordinal information about the level of quality, numerical information is induced when considering uncertainty involved during measurement. In the case where uncertainty is modelled as probability, this numerical information is ratio-scaled. An essential property of the mentioned approach is that the application of a measure on a large collection of data can be represented efficiently in the sense that (i) the representation has a low storage complexity and (ii) it can be updated incrementally when new data are observed. However, this property only holds when the evaluation of predicates is clear and does not deal with uncertainty. For some dimensions of quality, this assumption is far too strong and uncertainty comes into play almost naturally. In this paper, we investigate how the presence of uncertainty influences the efficiency of a measurement procedure. In doing so, we focus specifically on the case where uncertainty is caused by insufficient information and is thus modelled by means of possibility theory. It is shown that the amount of data that reaches a certain level of quality can be summarized as a possibility distribution over the set of natural numbers. We investigate an approximation of this distribution that has a controllable loss of information, allows for incremental updates and exhibits a low space complexity.
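The idea of an incrementally updatable possibility distribution over counts can be sketched as follows. This is an illustrative construction, not the paper's exact one: each record contributes a pair (possibility that it passes the quality predicate, possibility that it fails), and the count distribution is folded forward with a min-max convolution.

```python
# Illustrative sketch (assumed construction, not the paper's): maintain a
# possibility distribution over "how many records reach the quality
# level", updated incrementally as records arrive.

def update(counts, poss_pass, poss_fail):
    """Incrementally fold one record's (pass, fail) possibilities into
    the possibility distribution over counts (min-max convolution)."""
    new = {}
    for k, p in counts.items():
        new[k + 1] = max(new.get(k + 1, 0.0), min(poss_pass, p))
        new[k] = max(new.get(k, 0.0), min(poss_fail, p))
    return new

counts = {0: 1.0}                  # before any record: certainly zero
counts = update(counts, 1.0, 0.0)  # record certainly passes
counts = update(counts, 1.0, 0.4)  # record possibly fails (possibility 0.4)
print(counts)
# → a count of 2 is fully possible; a count of 1 has possibility 0.4
```

Each update touches only the existing count keys, so the summary stays small and never requires revisiting earlier records, which matches the incremental, low-space requirements stated in the abstract.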


BMC Microbiology | 2018

Disbiome database: linking the microbiome to disease

Yorick Janssens; Joachim Nielandt; Antoon Bronselaer; Nathan Debunne; Frederick Verbeke; Evelien Wynendaele; Filip Van Immerseel; Yves-Paul Vandewynckel; Guy De Tré; Bart De Spiegeleer

Background: Recent research has provided fascinating indications and evidence that host health is linked to its microbial inhabitants. Due to the development of high-throughput sequencing technologies, more and more data covering microbial composition changes in different disease types are emerging. However, this information is dispersed over a wide variety of medical and biomedical disciplines. Description: Disbiome is a database which collects and presents published microbiota-disease information in a standardized way. The diseases are classified using the MedDRA classification system and the micro-organisms are linked to their NCBI and SILVA taxonomy. Finally, each study included in the Disbiome database is assessed for its reporting quality using a standardized questionnaire. Conclusions: Disbiome is the first database giving a clear, concise and up-to-date overview of microbial composition differences in diseases, together with the relevant information of the studies published. The strength of this database lies in the combination of references to other databases, which enables both specific and diverse search strategies within Disbiome, and human annotation, which ensures a simple and structured presentation of the available data.


Flexible Query Answering Systems | 2017

On the Need for Explicit Confidence Assessments of Flexible Query Answers

Guy De Tré; Robin De Mol; Antoon Bronselaer

Flexible query answering systems aim to exploit data collections in a richer way than traditional systems can. In approaches where flexible criteria are used to reflect user preferences, expressing query satisfaction becomes a matter of degree. Nowadays, it is increasingly common that data originating from different sources and different data providers are involved in the processing of a single query. Also, data sets can be very large, such that not all data within a database or data store can be trusted to the same extent, and consequently the results in a query answer cannot be trusted to the same extent either. For this reason, data quality assessment becomes an important aspect of query processing. In this paper we discuss the need for explicit data quality assessments of query results. Indeed, to correctly inform users, it is in our opinion essential to communicate not only the satisfaction degrees in a query answer, but also the confidence in these satisfaction degrees as derived from data quality assessment. As an illustration, we propose a hierarchical approach for query processing and data quality assessment, supporting the computation of both a satisfaction degree and its associated confidence degree for each element of the query result. Providing confidence information adds an extra dimension to query processing and leads to more sound query answers.


Information Sciences | 2017

Handling veracity in multi-criteria decision-making: a multi-dimensional approach

Guy De Tré; Robin De Mol; Antoon Bronselaer

Decision support systems aim to help a decision maker select, from a set of available options, the option that best meets her or his needs. In multi-criteria based decision support approaches, a suitability degree is computed for each option, reflecting how suitable that option is considering the preferences of the decision maker. Nowadays, it is increasingly common that data of different quality, originating from different data sets and different data providers, have to be integrated and processed in order to compute the suitability degrees. Also, data sets can be very large, such that their data become prone to incompleteness, imprecision and uncertainty. Hence, not all data used for decision making can be trusted to the same extent, and consequently, neither can the results of computations with such data. For this reason, data quality assessment becomes an important aspect of a decision making process. To correctly inform the users, it is essential to communicate not only the computed suitability degrees of the available options, but also the confidence in these suitability degrees as derived from data quality assessment. In this paper a novel multi-dimensional approach for data quality assessment in multi-criteria decision making, supporting the computation of associated confidence degrees, is presented. Providing confidence information adds an extra dimension to the decision making process and leads to sounder decisions. The added value of the approach is illustrated with aspects of a geographic decision making process.
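The idea of pairing each suitability degree with a confidence degree can be sketched minimally. The aggregation choices below (weighted mean for suitability, minimum for confidence) and the house example are illustrative assumptions, not the paper's exact operators.

```python
# Minimal sketch: each criterion yields a satisfaction degree for the
# option AND a confidence degree derived from the quality of the data it
# used; the option gets a (suitability, confidence) pair. Aggregators
# chosen here are assumptions, not the paper's operators.

def evaluate(option, criteria):
    """criteria: list of (weight, satisfaction_fn, confidence_fn)."""
    total_w = sum(w for w, _, _ in criteria)
    suitability = sum(w * sat(option) for w, sat, _ in criteria) / total_w
    confidence = min(conf(option) for _, _, conf in criteria)  # weakest link
    return suitability, confidence

house = {"price": 180_000, "price_quality": 0.9, "area": 120, "area_quality": 0.6}
criteria = [
    # weight, satisfaction of the criterion, confidence in the source data
    (2, lambda h: 1.0 if h["price"] < 200_000 else 0.0, lambda h: h["price_quality"]),
    (1, lambda h: min(h["area"] / 150, 1.0),            lambda h: h["area_quality"]),
]
print(evaluate(house, criteria))
```

Here the house scores well on suitability, but the low quality of the area data caps the confidence at 0.6, so a decision maker sees both how good the option looks and how much that assessment can be trusted.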
