Bert de Brock
University of Groningen
Publications
Featured research published by Bert de Brock.
Research Synthesis Methods | 2012
Gert van Valkenhoef; Guobing Lu; Bert de Brock; Hans L. Hillege; Ae Ades; Nicky J. Welton
Mixed treatment comparison (MTC) (also called network meta-analysis) is an extension of traditional meta-analysis to allow the simultaneous pooling of data from clinical trials comparing more than two treatment options. Typically, MTCs are performed using general-purpose Markov chain Monte Carlo software such as WinBUGS, requiring a model and data to be specified using a specific syntax. It would be preferable if, for the most common cases, both could be derived from a well-structured data file that can be easily checked for errors. Automation is particularly valuable for simulation studies in which the large number of MTCs that have to be estimated may preclude manual model specification and analysis. Moreover, automated model generation raises issues that provide additional insight into the nature of MTC. We present a method for the automated generation of Bayesian homogeneous variance random effects consistency models, including the choice of basic parameters and trial baselines, priors, and starting values for the Markov chain(s). We validate our method against the results of five published MTCs. The method is implemented in freely available open source software. This means that performing an MTC no longer requires manually writing a statistical model. This reduces time and effort, and facilitates error checking of the dataset.
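The consistency assumption that such models encode can be illustrated with the classic Bucher adjusted indirect comparison, which the generated consistency models generalize to whole networks. A minimal sketch in Python; the effect estimates below are invented for illustration, not taken from the paper:

```python
import math

def indirect_comparison(d_ab, se_ab, d_bc, se_bc):
    """Combine two direct relative effect estimates (e.g. log odds ratios)
    for A-vs-B and B-vs-C into an indirect estimate for A-vs-C.

    Under the consistency assumption, d_AC = d_AB + d_BC, and the
    variances of the independent estimates add."""
    d_ac = d_ab + d_bc
    se_ac = math.sqrt(se_ab ** 2 + se_bc ** 2)
    return d_ac, se_ac

# Hypothetical log odds ratios from two pairwise meta-analyses:
d_ac, se_ac = indirect_comparison(0.5, 0.2, -0.3, 0.1)
print(d_ac, se_ac)  # ≈ 0.2 and ≈ 0.224
```

An MTC consistency model applies this additivity to every treatment contrast simultaneously, with random effects for between-trial heterogeneity.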
Decision Support Systems | 2013
Gert van Valkenhoef; Tommi Tervonen; Tijs Zwinkels; Bert de Brock; Hans L. Hillege
Clinical trials are the main source of information for the efficacy and safety evaluation of medical treatments. Although they are of pivotal importance in evidence-based medicine, there is a lack of usable information systems providing data-analysis and decision support capabilities for aggregate clinical trial results. This is partly caused by unavailability (i) of trial data in a structured format suitable for re-analysis, and (ii) of a complete data model for aggregate level results. In this paper, we develop a unifying data model that enables the development of evidence-based decision support in the absence of a complete data model. We describe the supported decision processes and show how these are implemented in the open source ADDIS software. ADDIS enables semi-automated construction of meta-analyses, network meta-analyses and benefit-risk decision models, and provides visualization of all results.
Journal of Clinical Epidemiology | 2012
Gert van Valkenhoef; Tommi Tervonen; Jing Hua Zhao; Bert de Brock; Hans L. Hillege; Douwe Postmus
Objective: To enable multicriteria benefit-risk (BR) assessment of any number of alternative treatments using all available evidence from a network of clinical trials. Study design and setting: We design a general method for multicriteria decision aiding with criteria measurements from Mixed Treatment Comparison (MTC) analyses. To evaluate the method, we apply it to BR assessment of four second-generation antidepressants and placebo in the setting of a published peer-reviewed systematic review. Results: The analysis without preference information shows that placebo is supported by a wide range of possible preferences. Preference information provided by a clinical expert showed that although treatment with antidepressants is warranted for severely depressed patients, for mildly depressed patients placebo is likely to be the best option. It is difficult to choose between the four antidepressants, and the results of the model indicate a high degree of uncertainty. Conclusions: The designed method enables quantitative BR analysis of alternative treatments using all available evidence from a network of clinical trials. The preference-free analysis can be useful in presenting the results of an MTC considering multiple outcomes.
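The "analysis without preference information" is in the spirit of stochastic multicriteria acceptability analysis: weights are sampled uniformly and one records how often each alternative comes out best. A small self-contained sketch; the alternative names and criteria scores below are invented for illustration:

```python
import random

# Hypothetical (alternative x criterion) scores, already standardized
# to [0, 1] with higher = better; criteria: [efficacy, safety].
scores = {
    "placebo": [0.2, 0.9],
    "drug_a":  [0.8, 0.4],
    "drug_b":  [0.6, 0.6],
}

def first_rank_acceptability(scores, n_samples=10_000, seed=42):
    """Estimate first-rank acceptability: the share of uniformly sampled
    weight vectors under which each alternative attains the highest
    weighted-sum value (two criteria, so one free weight)."""
    rng = random.Random(seed)
    wins = {alt: 0 for alt in scores}
    for _ in range(n_samples):
        w = rng.random()
        weights = (w, 1.0 - w)
        best = max(scores, key=lambda a: sum(
            wi * si for wi, si in zip(weights, scores[a])))
        wins[best] += 1
    return {alt: n / n_samples for alt, n in wins.items()}

acc = first_rank_acceptability(scores)
print(acc)
```

A wide acceptability for placebo, as in the paper's preference-free result, means many weightings favor it; adding expert preference information corresponds to restricting the sampled weight space.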
Statistics and Computing | 2012
Gert van Valkenhoef; Tommi Tervonen; Bert de Brock; Hans L. Hillege
Mixed Treatment Comparisons (MTCs) enable the simultaneous meta-analysis (data pooling) of networks of clinical trials comparing ≥2 alternative treatments. Inconsistency models are critical in MTC to assess the overall consistency between evidence sources. Only in the absence of considerable inconsistency can the results of an MTC (consistency) model be trusted. However, specifying an inconsistency model is non-trivial when multi-arm trials are present in the evidence structure. In this paper, we define the parameterization problem for inconsistency models in mathematical terms and provide an algorithm for the generation of inconsistency models. We evaluate the running time of the algorithm by generating models for 15 published evidence structures.
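For networks consisting only of two-arm trials, the number of potential inconsistency factors is the cycle rank of the comparison graph; multi-arm trials reduce this count, which is exactly the non-trivial case the paper's algorithm handles. A minimal sketch of the two-arm case, with an invented evidence structure:

```python
def inconsistency_degree(treatments, comparisons):
    """Number of independent evidence loops in a connected network of
    two-arm trials: |E| - |V| + 1, where E is the set of distinct
    pairwise comparisons and V the set of treatments. Each independent
    loop admits one inconsistency factor."""
    edges = {frozenset(c) for c in comparisons}
    return len(edges) - len(treatments) + 1

# Hypothetical network: A-B, B-C, A-C form one loop; C-D is a tree edge.
treatments = {"A", "B", "C", "D"}
comparisons = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]
print(inconsistency_degree(treatments, comparisons))  # 1
```

With multi-arm trials some loops are estimated jointly by a single trial and carry no inconsistency, so the parameterization must choose which contrasts are basic and which are functional.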
Archive | 2001
Herman Balsters; Bert de Brock; Stefan Conrad
For adequately specifying and rapidly prototyping concurrent information systems, we proposed in [AS99] a new form of object-oriented (OO) Petri nets. Referred to as Co-nets, this approach in particular makes it possible to conceive of such systems as complex, autonomous, yet cooperating components. Moreover, to cope with the intrinsic dynamic evolution of such systems, we extended this proposal in a straightforward way by introducing the notions of meta-places, non-instantiated transitions, and a two-step evaluated inference rule [Aou00]. The purpose of this paper is to tackle another crucial dimension of real-world information systems, namely static and dynamic integrity constraints. To this end, we propose to associate a 'constraints' class with each component. To enforce such constraints, we propose an appropriate 'synchronization' inference rule that semantically relates 'constraints' transitions with the intrinsically dependent ones in the associated component. For more flexible consistency management, we enrich this first proposal with an adequate meta-level, where constraints may be dynamically created, modified, or deleted. Finally, we show how this proposal covers a large number of constraint subclasses, including life-cycle-based constraints and constraints based on complex derived information, such as view classes.
Information & Software Technology | 2011
Gert van Valkenhoef; Tommi Tervonen; Bert de Brock; Douwe Postmus
Context: Extreme Programming (XP) is one of the most popular agile software development methodologies. XP is defined as a consistent set of values and practices designed to work well together, but it lacks practices for project management and especially for supporting the customer role. The customer representative is constantly under pressure and may have difficulty foreseeing the adequacy of a release plan. Objective: To assist release planning in XP by structuring the planning problem and providing an optimization model that suggests a suitable release plan. Method: We develop an optimization model that generates a release plan taking into account story size, business value, possible precedence relations, themes, and uncertainty in velocity prediction. Running-time feasibility is established through computational tests. In addition, we provide a practical heuristic approach to velocity estimation. Results: Computational tests show that problems with up to six themes and 50 stories can be solved exactly. An example provides insight into the uncertainties affecting velocity and indicates that the model can be applied in practice. Conclusion: An optimization model can be used in practice to enable the customer representative to make more informed decisions faster. This can help with adopting XP in projects where plan-driven approaches have traditionally been used.
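At its core, such a release-planning model resembles a 0/1 knapsack problem: select stories maximizing business value subject to the predicted velocity. A stripped-down sketch that omits the themes, precedence relations, and velocity uncertainty of the full model; the story data are invented:

```python
def plan_release(stories, capacity):
    """0/1 knapsack by dynamic programming. `stories` is a list of
    (name, size, value) triples; `capacity` is the predicted velocity
    in story points. Returns (total_value, selected story names)."""
    # best[c] = (value, selection) achievable within capacity c
    best = [(0, [])] * (capacity + 1)
    for name, size, value in stories:
        new_best = best[:]
        for c in range(size, capacity + 1):
            v, sel = best[c - size]
            if v + value > new_best[c][0]:
                new_best[c] = (v + value, sel + [name])
        best = new_best
    return best[capacity]

stories = [("login", 3, 7), ("search", 5, 9), ("export", 4, 6), ("audit", 2, 3)]
value, selected = plan_release(stories, capacity=9)
print(value, selected)  # 16 ['login', 'search']
```

The full model additionally values completed themes and hedges against velocity uncertainty, which turns the problem into the richer integer program evaluated in the paper.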
BMC Medical Informatics and Decision Making | 2012
Gert van Valkenhoef; Tommi Tervonen; Bert de Brock; Hans L. Hillege
Background: Decisions concerning drug safety and efficacy are generally based on pivotal evidence provided by clinical trials. Unfortunately, finding the relevant clinical trials is difficult and their results are only available in text-based reports. Systematic reviews aim to provide a comprehensive overview of the evidence in a specific area, but may not provide the data required for decision making. Methods: We review and analyze the existing information systems and standards for aggregate level clinical trials information from the perspective of systematic review and evidence-based decision making. Results: The technology currently used has major shortcomings, which cause deficiencies in the transfer, traceability and availability of clinical trials information. Specifically, data available to decision makers is insufficiently structured, and consequently the decisions cannot be properly traced back to the underlying evidence. Regulatory submission, trial publication, trial registration, and systematic review produce unstructured datasets that are insufficient for supporting evidence-based decision making. Conclusions: The current situation is a hindrance to policy decision makers as it prevents fully transparent decision making and the development of more advanced decision support systems. Addressing the identified deficiencies would enable more efficient, informed, and transparent evidence-based medical decision making.
Cooperative Information Systems | 2004
Herman Balsters; Bert de Brock
A database federation provides for tight coupling of a collection of heterogeneous legacy databases into a global integrated system. A large problem regarding information quality in database federations concerns achieving and maintaining consistency of the data on the global level of the federation. Integrity constraints are an essential part of any database schema and are aimed at maintaining data consistency in an arbitrary database state. Data inconsistency problems in database federations resulting from the integration of integrity constraints can basically occur in two situations. The first situation pertains to the integration of existing local integrity constraints occurring within component legacy databases into a single global federated schema, whereas the second situation pertains to the introduction of newly defined additional integrity constraints on the global level of the federation. These situations give rise to problems in so-called global and local understandability of updates in database federations. We shall describe a semantic framework for the specification of federated database schemas based on the UML/OCL data model; UML/OCL will be shown to provide a high-level, coherent, and precise framework in which to specify and analyze integrity constraints in database federations. This paper will tackle the problem of global and local understandability by introducing a new algorithm describing the integration of integrity constraints occurring in local databases. Our algorithm is based on the principle of tight constraining; i.e., integration of local integrity constraints into a single global federated schema takes place without any loss of constraint information. Our algorithm improves on existing algorithms in three respects: it offers a considerable reduction in complexity; it applies to a larger category of local integrity constraints; and it results in a global federated schema with a clear maintenance strategy for update operations.
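The principle of tight constraining can be illustrated schematically: a federated row is tagged with the component it originated from, and it must satisfy both that component's local constraint and any newly defined global constraint, so no local constraint information is lost. A toy sketch with invented component names and constraints:

```python
def make_federated_constraint(local_constraints, global_constraint):
    """Tight constraining, schematically: a federated row is a pair
    (source, record). It is valid iff it satisfies its originating
    component's local constraint AND the federation-wide constraint."""
    def valid(source, record):
        return local_constraints[source](record) and global_constraint(record)
    return valid

# Two hypothetical component databases with different local rules:
local = {
    "branch_nl": lambda r: r["salary"] >= 1000,  # local minimum wage
    "branch_uk": lambda r: r["age"] >= 18,       # local age rule
}
global_cap = lambda r: r["salary"] <= 100_000    # new global constraint

valid = make_federated_constraint(local, global_cap)
print(valid("branch_nl", {"salary": 1500, "age": 17}))  # True
print(valid("branch_uk", {"salary": 1500, "age": 17}))  # False
```

Because validity depends on the row's provenance, an update accepted on the global level remains understandable locally: it can be traced to the component whose constraint admitted it.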
International Conference on Agile Software Development | 2010
Gert van Valkenhoef; Tommi Tervonen; Bert de Brock; Douwe Postmus
Extreme Programming (XP) is an agile software development methodology defined through a set of practices and values. Although the value of XP is well-established through various real-life case studies, it lacks practices for project management. In order to enable XP for larger projects, we provide the rolling forecast practice to support product planning, and an optimization model to assist in release planning. We briefly evaluate the new practices with a real-life case study.
6th IFIP TC-11 WG 11.5 Conference on Integrity and Internal Control in Information Systems | 2003
Herman Balsters; Bert de Brock
A database federation provides for tight coupling of a collection of heterogeneous legacy databases into a global integrated system. A major problem in constructing database federations concerns maintaining quality and consistency of data on the global level of the federation. In particular, the integration of integrity constraints within component legacy databases into a single global schema is a process that is prone to incompleteness and inconsistency. Moreover, additional inter-database constraints between the various component legacy databases often also have to be taken into account to complete the integration process. Our approach to coupling of component databases into a global, integrated system is based on the concept of mediation. Our major result is that mediation in combination with a so-called integration isomorphism integrates component schemas without loss of constraint information; i.e., integrity constraints available at the component level remain intact after integration on the global level of the federated database. Our approach to integration also allows for specification of additional inter-database constraints between the various component legacy databases. We therefore can handle consistency not only on the local level of component databases, but also consistency on the global level between the various component databases. We shall describe a general semantic framework for specification of database federations based on the UML data model. This data model will prove to offer an elegant and powerful framework for analysis and design of database federations, including integration of integrity constraints.