P. van Bommel
Radboud University Nijmegen
Publications
Featured research published by P. van Bommel.
Information Systems | 1991
P. van Bommel; A. H. M. ter Hofstede; Th.P. van der Weide
In this paper we formalize data models that are based on the concept of predicator, the combination of an object type and a role. A very simple model, the Predicator Model, is introduced in a rigid formal way. We introduce the concept of population as an instantiation of an information structure. A primitive manipulation language is defined in the style of relational algebra. Well-known types of constraints are defined in terms of the algebra introduced, as restrictions on populations. They are given more expressive power than is usually the case. Constraints are of central importance for identification purposes. Weak identification ensures identifiability of objects within a specific population, while structural identification ensures identifiability of objects within every population. Different levels of constraint inconsistency are defined and it is shown that the verification of two important levels is NP-complete.
On the Move to Meaningful Internet Systems (OTM) | 2006
P. van Bommel; Stijn Hoppenbrouwers; Henderik Alex Proper; Th.P. van der Weide
Formalization of architecture principles by means of ORM and the Object Role Calculus (ORC) is explored. After a discussion of reasons for formalizing such principles, and of the perceived relationship between principles and (business) rules, two exploratory example formalizations are presented and discussed. They concern architecture principles taken from The Open Group Architecture Framework (TOGAF). It is argued that when using ORM and ORC for formal modelling of architecture principles, the underlying logical principles of the techniques may lead to better insight into the rational structure of the principles. Thus, apart from achieving formalization, the quality of the principles as such can be improved.
Hawaii International Conference on System Sciences | 2009
J. Nabukenya; P. van Bommel; Henderik Alex Proper
In this paper we consider the improvement of collaborative policy making processes. We suggest Collaboration Engineering (CE) as an approach that can be useful in enhancing these processes. However, CE needs a theoretical basis to guide the design. This basis is provided by quality dimensions and a causal theory. We therefore present a theory that provides an understanding of what makes good policies in policy making. This understanding should lead to design choices to be taken into account when designing quality collaborative policy making processes. To determine the quality dimensions of good policies, we use exploratory field studies and literature from the policy making domain. Furthermore, we consider cause and effect relationships between these quality dimensions to derive the theory.
On the Move to Meaningful Internet Systems (OTM) | 2006
P. van Bommel; Stijn Hoppenbrouwers; Henderik Alex Proper; Th.P. van der Weide
We are concerned with a core aspect of the process of obtaining conceptual models. We view such processes as information gathering dialogues, in which strategies may be followed (possibly, imposed) in order to achieve certain modelling goals. Many goals and strategies for modelling can be distinguished, but the current discussion concerns meta-model driven strategies, aiming to fulfil modelling goals or obligations that are the direct result of meta-model choices (i.e., the chosen modelling language). We provide a rule-based conceptual framework for capturing strategies for modelling, and give examples based on a simplified version of the Object Role Modelling (ORM) meta-model. We discuss strategy rules directly related to the meta-model, as well as additional procedural rules. We indicate how the strategies may be used to dynamically set a modelling agenda. Finally, we describe a generic conceptual structure for a strategy catalog.
Information Sciences | 2007
B. van Gils; H. A. Erik Proper; P. van Bommel; Th.P. van der Weide
We use information from the Web for performing our daily tasks more and more often. Locating the right resources that help us in doing so is a daunting task, especially given the present rate of growth of the Web and the many different kinds of resources available. The task of search engines is to assist us in finding those resources that are apt for our given tasks. In this paper we propose to use the notion of quality as a metric for estimating the aptness of online resources for individual searchers. The formal model for quality as presented in this paper is firmly grounded in literature. It is based on the observation that objects (dubbed artefacts in our work) can play different roles (i.e., perform different functions). An artefact can be of high quality in one role but of poor quality in another. Moreover, the notion of quality is highly personal. Our quality computations for estimating the aptness of resources for searchers use the notion of linguistic variables from the field of fuzzy logic. After presenting our model for quality we also show how manipulation of online resources by means of transformations can influence the quality of these resources.
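The abstract above grounds its quality computations in linguistic variables from fuzzy logic. As an illustration of that idea only (the terms, breakpoints, and membership shapes below are hypothetical, not taken from the paper), a linguistic variable "quality" over [0, 1] with triangular membership functions might be sketched as:

```python
def triangular(x, a, b, c):
    """Triangular membership function with peak at b and feet at a and c."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical linguistic terms for the variable "quality" on [0, 1].
TERMS = {
    "poor": (0.0, 0.0, 0.5),
    "fair": (0.2, 0.5, 0.8),
    "good": (0.5, 1.0, 1.0),
}

def describe(score):
    # Degree of membership of a numeric quality score in each linguistic term.
    return {term: round(triangular(score, a, b, c), 2)
            for term, (a, b, c) in TERMS.items()}
```

For example, `describe(0.5)` assigns full membership to "fair" and none to "poor" or "good", while intermediate scores belong partially to two terms at once.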
Information Sciences | 2006
Th.P. van der Weide; P. van Bommel
The incremental searcher satisfaction model for Information Retrieval has been introduced to capture the incremental information value of documents. In this paper, searcher requirements are derived from various cognitive perspectives in terms of the increment function. Different approaches to the construction of increment functions are identified, such as the individual and the collective approach. Translating the requirements to similarity functions leads to the so-called base similarity features and the monotonicity similarity features. We show that most concrete similarity functions in IR, such as the Inclusion, Jaccard, Dice, and Cosine coefficients, as well as some other approaches to similarity functions, possess the base similarity features. The Inclusion coefficient also satisfies the monotonicity features.
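The four coefficients named in this abstract have standard set-based definitions; a minimal sketch over term sets (the paper's own formulation may differ in detail, e.g. by using weighted term vectors):

```python
import math

def jaccard(a, b):
    # |A ∩ B| / |A ∪ B|
    return len(a & b) / len(a | b)

def dice(a, b):
    # 2|A ∩ B| / (|A| + |B|)
    return 2 * len(a & b) / (len(a) + len(b))

def inclusion(a, b):
    # |A ∩ B| / |A| : overlap relative to the first set (e.g. the query)
    return len(a & b) / len(a)

def cosine(a, b):
    # |A ∩ B| / sqrt(|A| * |B|) : set-based cosine coefficient
    return len(a & b) / math.sqrt(len(a) * len(b))

query = {"incremental", "searcher", "satisfaction"}
doc = {"incremental", "searcher", "model", "retrieval"}
scores = {f.__name__: f(query, doc)
          for f in (jaccard, dice, inclusion, cosine)}
```

Note how Inclusion is the only one of the four that is insensitive to the size of the second set, which is related to its special monotonicity behaviour mentioned above.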
Data & Knowledge Engineering | 1992
P. van Bommel; Th.P. van der Weide
In this paper we focus on the transformation of a conceptual schema into an internal schema. For a given conceptual schema, quite a number of internal schemata can be derived. This number can be reduced by imposing restrictions on internal schemata. We present a transformation algorithm that can generate internal schemata of several types (including the relational model and the NF² model). Guidance parameters are used to impose further restrictions. We harmonise the different types of schemata by extending the conceptual language, such that both the conceptual and the internal models can be represented within the same language.
The Computer Journal | 1992
Th.P. van der Weide; A. H. M. ter Hofstede; P. van Bommel
In this article the Uniquest Algorithm (the quest for uniqueness), defined in the Predicator Model, is discussed in depth. The Predicator Model is a general platform for object-role models. The Uniquest Algorithm is a constructive formal definition of the semantics of uniqueness constraints. As such, it facilitates their implementation in so-called CASE tools.
Data & Knowledge Engineering | 2004
B. van Gils; Henderik Alex Proper; P. van Bommel
In this paper we introduce a conceptual model for information supply which abstracts from enabling technologies such as file types, transport protocols, RDF, and DAML+OIL. Rather than focusing on technologies that may be used to actually implement information supply, we focus on the question of what information supply is and how it relates to the data (resources) found on the Web today. By taking a high level of abstraction we can gain more insight into the information market, compare different views on it, and even present the architecture of a prototype retrieval system (Vimes) which uses transformations to deal with the heterogeneity of information supply.
Information & Software Technology | 1994
P. van Bommel; Th.P. van der Weide; C.B. Lucasius
The focus of this paper is database design using automated database design tools or, more generally, CASE tools. We present a genetic algorithm for the optimization of (internal) database structures, using a multi-criterion objective function. This function expresses conflicting objectives, reflecting the well-known time/space trade-off. The paper shows how the solution space of the algorithm can be set up in the form of tree structures (forests), and how these are encoded by a simple integer assignment. Genetic operators (database transformations) defined in terms of this encoding behave as if they manipulate tree structures. Some basic experimental results produced by a research prototype are presented.
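The scheme described above, trees encoded as integer vectors so that ordinary genetic operators stay within the space of valid structures, can be illustrated with a toy sketch. Everything here is a hypothetical stand-in: the cost functions, weights, and encoding details are invented for illustration and are not the paper's actual objective function or operators.

```python
import random

# Hypothetical weights for the two conflicting time/space objectives.
W_TIME, W_SPACE = 0.6, 0.4
N_NODES = 8  # record types to arrange in the internal schema

def random_individual():
    # Gene i chooses the parent of node i+1 among earlier nodes, so
    # every integer vector always decodes to a valid tree (forest).
    return [random.randrange(i + 1) for i in range(N_NODES - 1)]

def fitness(ind):
    # Toy stand-ins for real cost estimators: deep trees cost retrieval
    # time, flat trees cost redundant storage space. Lower is better.
    depth = {0: 0}
    for i, parent in enumerate(ind, start=1):
        depth[i] = depth[parent] + 1
    time_cost = sum(depth.values())
    space_cost = sum(1 for d in depth.values() if d <= 1)
    return W_TIME * time_cost + W_SPACE * space_cost

def crossover(a, b):
    # One-point crossover; validity is positional, so the child is valid.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(ind):
    # Re-attach one node to another earlier node (a tree transformation).
    i = random.randrange(len(ind))
    ind[i] = random.randrange(i + 1)
    return ind

def evolve(generations=50, pop_size=30):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=fitness)
```

The point of the encoding is the one made in the abstract: crossover and mutation act on flat integer vectors, yet by construction every offspring still decodes to a well-formed tree, so no repair step is needed.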