Th.P. van der Weide
Radboud University Nijmegen
Publications
Featured research published by Th.P. van der Weide.
Information Systems | 1993
A.H.M. ter Hofstede; Henderik Alex Proper; Th.P. van der Weide
Conceptual data modelling techniques aim at the representation of data at a high level of abstraction. This implies that conceptual data modelling techniques should not only be capable of naturally representing complex structures, but also the rules (constraints) that must hold for these structures. Contemporary data modelling techniques, however, do not provide a language for formulating these constraints that both has a formal semantics and yields natural-looking expressions. In this paper such a language is defined for an existing data modelling technique (PSM), which is a generalisation of object-role models (such as ER or NIAM). In this language not only constraints, but also queries and updates, can be expressed at the conceptual level.
Data and Knowledge Engineering | 1993
A.H.M. ter Hofstede; Th.P. van der Weide
Conceptual data modelling techniques aim at the representation of data at a high level of abstraction. The Conceptualisation Principle states that only those aspects are to be represented that deal with the meaning of the Universe of Discourse. Conventional conceptual data modelling techniques, such as ER or NIAM, have to violate the Conceptualisation Principle when dealing with objects with a complex structure. In order to represent these objects, conceptually irrelevant choices have to be made. Worse still, sometimes the Universe of Discourse has to be adapted to suit the modelling technique. Such objects typically occur in domains such as meta-modelling, hypermedia and CAD/CAM. In this paper extensions to an existing data modelling technique (NIAM) are discussed and formally defined that make it possible to naturally represent objects with complex structures without violating the Conceptualisation Principle. These extensions are motivated from a practical point of view by examples, and from a theoretical point of view by a comparison with the expressive power of formal set theory and grammar theory.
Information Systems | 1991
P. van Bommel; A. H. M. ter Hofstede; Th.P. van der Weide
In this paper we formalize data models that are based on the concept of predicator: the combination of an object type and a role. A very simple model, the Predicator Model, is introduced in a rigorous formal way. We introduce the concept of population as an instantiation of an information structure. A primitive manipulation language is defined in the style of relational algebra. Well-known types of constraints are defined, in terms of the algebra introduced, as restrictions on populations; they are given more expressive power than is usually the case. Constraints are of central importance for identification purposes. Weak identification ensures identifiability of objects within a specific population, while structural identification ensures identifiability of objects within every population. Different levels of constraint inconsistency are defined, and it is shown that the verification of two important levels is NP-complete.
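As a rough illustration of these notions (the names below are hypothetical; the paper itself defines everything algebraically), a predicator can be modelled as an (object type, role) pair, a population as a set of facts over predicators, and weak identification checked per population:

```python
# Minimal sketch of the Predicator Model's core notions, with
# hypothetical names: a predicator pairs an object type with a role,
# a fact type is a tuple of predicators, and a population instantiates
# the information structure.
from dataclasses import dataclass

@dataclass(frozen=True)
class Predicator:
    object_type: str  # e.g. "Person"
    role: str         # e.g. "employee"

# A fact type modelled as a tuple of predicators.
Employment = (
    Predicator("Person", "employee"),
    Predicator("Company", "employer"),
)

# A population: each fact maps predicators to concrete instances.
population = [
    {Employment[0]: "alice", Employment[1]: "acme"},
    {Employment[0]: "bob",   Employment[1]: "acme"},
]

def weakly_identified(pop, key):
    """True iff the given predicators identify facts uniquely within
    this specific population (weak identification)."""
    seen = set()
    for fact in pop:
        signature = tuple(fact[p] for p in key)
        if signature in seen:
            return False
        seen.add(signature)
    return True

print(weakly_identified(population, [Employment[0]]))  # True: 'employee' is unique here
```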
International Conference on Move to Meaningful Internet Systems | 2006
P. van Bommel; Stijn Hoppenbrouwers; Henderik Alex Proper; Th.P. van der Weide
Formalization of architecture principles by means of ORM and the Object Role Calculus (ORC) is explored. After a discussion of reasons for formalizing such principles, and of the perceived relationship between principles and (business) rules, two exploratory example formalizations are presented and discussed. They concern architecture principles taken from The Open Group's Architecture Framework (TOGAF). It is argued that when using ORM and ORC for formal modelling of architecture principles, the underlying logical principles of the techniques may lead to better insight into the rational structure of the principles. Thus, apart from achieving formalization, the quality of the principles as such can be improved.
Conference on Advanced Information Systems Engineering | 1992
A. H. M. ter Hofstede; Henderik Alex Proper; Th.P. van der Weide
In many non-trivial application domains, object types with a complex structure occur. Data modelling techniques which only allow flat structures are not suitable for representing such complex object types. In this paper a general data modelling technique, the Predicator Set Model, is introduced, which is capable of representing complex structures in a natural way.
Information Sciences | 2007
B. van Gils; H. A. Erik Proper; P. van Bommel; Th.P. van der Weide
We use information from the Web for performing our daily tasks more and more often. Locating the right resources that help us in doing so is a daunting task, especially given the present rate of growth of the Web and the many different kinds of resources available. The task of search engines is to assist us in finding those resources that are apt for our given tasks. In this paper we propose to use the notion of quality as a metric for estimating the aptness of online resources for individual searchers. The formal model for quality presented in this paper is firmly grounded in the literature. It is based on the observation that objects (dubbed artefacts in our work) can play different roles (i.e., perform different functions). An artefact can be of high quality in one role but of poor quality in another. Moreover, the notion of quality is highly personal. Our quality computations for estimating the aptness of resources for searchers use the notion of linguistic variables from the field of fuzzy logic. After presenting our model for quality, we also show how manipulation of online resources by means of transformations can influence the quality of these resources.
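As a sketch of the fuzzy-logic ingredient (the labels, breakpoints and the [0, 1] quality scale below are illustrative assumptions, not taken from the paper), a linguistic variable maps a numeric quality score to graded labels such as "poor" or "high":

```python
# Minimal sketch of a linguistic variable for resource quality,
# assuming triangular membership functions with illustrative breakpoints.
def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy set with peak b on [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

QUALITY = {
    "poor":   lambda x: triangular(x, -0.01, 0.0, 0.5),
    "medium": lambda x: triangular(x, 0.0, 0.5, 1.0),
    "high":   lambda x: triangular(x, 0.5, 1.0, 1.01),
}

score = 0.8  # quality of some artefact in a given role, scaled to [0, 1]
memberships = {label: f(score) for label, f in QUALITY.items()}
print(memberships)  # poor = 0.0, medium ~ 0.4, high ~ 0.6
```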
Information Sciences | 2006
Th.P. van der Weide; P. van Bommel
The incremental searcher satisfaction model for Information Retrieval has been introduced to capture the incremental information value of documents. In this paper, searcher requirements are derived from various cognitive perspectives and stated in terms of the increment function. Different approaches for the construction of increment functions are identified, such as the individual and the collective approach. Translating the requirements to similarity functions leads to the so-called base similarity features and the monotonicity similarity features. We show that most concrete similarity functions in IR, such as the Inclusion, Jaccard, Dice and Cosine coefficients, as well as some other approaches to similarity functions, possess the base similarity features. The Inclusion coefficient also satisfies the monotonicity features.
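For reference, the named coefficients in their common set-based form over query and document term sets (a minimal sketch; the example sets are illustrative and assumed non-empty):

```python
# The four similarity coefficients named above, over term sets q and d.
def inclusion(q, d):
    return len(q & d) / len(q)              # fraction of query terms covered

def jaccard(q, d):
    return len(q & d) / len(q | d)

def dice(q, d):
    return 2 * len(q & d) / (len(q) + len(d))

def cosine(q, d):
    return len(q & d) / (len(q) * len(d)) ** 0.5

q = {"schema", "model", "role"}
d = {"model", "role", "population", "type"}
for f in (inclusion, jaccard, dice, cosine):
    print(f.__name__, round(f(q, d), 3))    # 0.667, 0.4, 0.571, 0.577
```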
Data and Knowledge Engineering | 1992
P. van Bommel; Th.P. van der Weide
In this paper we focus on the transformation of a conceptual schema into an internal schema. For a given conceptual schema, quite a number of internal schemata can be derived. This number can be reduced by imposing restrictions on internal schemata. We present a transformation algorithm that can generate internal schemata of several types (including the relational model and the NF² model). Guidance parameters are used to impose further restrictions. We harmonise the different types of schemata by extending the conceptual language, such that both the conceptual and the internal models can be represented within the same language.
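A minimal sketch of the flavour of such a transformation (the input format, names and guidance parameter below are illustrative assumptions, not the paper's algorithm): fact types are flattened into tables, and a guidance parameter decides whether fact types sharing a key are merged into one table:

```python
# Sketch: flatten conceptual fact types, given as (name, attributes, key),
# into relational tables. The guidance parameter 'merge_on_shared_key'
# restricts the outcome: when True, fact types with the same key are
# grouped into a single table (named after the first one encountered).
# The output is illustrative pseudo-DDL; column types are omitted.
def to_relational(fact_types, merge_on_shared_key=True):
    tables = {}
    for name, attrs, key in fact_types:
        k = tuple(key) if merge_on_shared_key else (name,)
        tname, cols = tables.get(k, (name, []))
        cols = cols + [a for a in attrs if a not in cols]
        tables[k] = (tname, cols)
    return [f"CREATE TABLE {t} ({', '.join(cols)});"
            for t, cols in tables.values()]

facts = [
    ("Employment", ["person", "company"], ["person"]),
    ("Residence",  ["person", "city"],    ["person"]),
]
for ddl in to_relational(facts):
    print(ddl)  # one merged table; two tables with merge_on_shared_key=False
```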
The Computer Journal | 1992
Th.P. van der Weide; A. H. M. ter Hofstede; P. van Bommel
In this article the Uniquest Algorithm (the quest for uniqueness), defined in the Predicator Model, is discussed in depth. The Predicator Model is a general platform for object-role models. The Uniquest Algorithm is a constructive formal definition of the semantics of uniqueness constraints. As such, it facilitates their implementation in so-called CASE tools.
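A rough illustration of the idea behind uniqueness constraints that span several fact types (the table contents and names below are hypothetical): the constraint is evaluated on the join of the fact types involved, not on either one alone:

```python
# Two fact tables sharing a role (department).
works_in = [("alice", "sales"), ("bob", "sales")]          # (person, dept)
located  = [("sales", "nijmegen"), ("sales", "utrecht")]   # (dept, city)

# Join the two fact types on the shared department role.
joined = [(p, d, c) for (p, d) in works_in
                    for (d2, c) in located if d == d2]

def unique(rows, cols):
    """True iff the projection of rows on the given columns is duplicate-free."""
    proj = [tuple(r[i] for i in cols) for r in rows]
    return len(proj) == len(set(proj))

# A uniqueness constraint on (person, city) only makes sense over the
# join path; here each (person, city) pair occurs once, so it holds.
print(unique(joined, (0, 2)))  # True
```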
International Journal of Approximate Reasoning | 2008
M.A.J. van Gerven; Peter J. F. Lucas; Th.P. van der Weide
Independence of causal influence (ICI) models offer a high-level starting point for the design of Bayesian networks. However, these models are not as widely applied as they could be, as their behavior is often not well understood. One approach is to employ qualitative probabilistic network theory in order to derive a qualitative characterization of ICI models. In this paper we analyze the qualitative properties of ICI models with binary random variables. Qualitative properties are shown to follow from the characteristics of the Boolean function underlying the model. In addition, it is demonstrated that the theory also allows finding constraints on the model parameters given knowledge of the qualitative properties. This high-level qualitative characterization offers a new way of identifying suitable ICI models and may facilitate their exploitation in developing real-world Bayesian networks.
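A small sketch using noisy-OR, the textbook ICI model over binary variables (the parameters below are illustrative): the Boolean function underlying it is OR, and the monotonicity of OR is what makes each cause's qualitative influence on the effect positive:

```python
# Noisy-OR: a classic ICI model. Each cause i, when active, fails to
# produce the effect with probability (1 - params[i]), independently.
def noisy_or(causes, params):
    """P(effect = true | causes), where params[i] is the probability
    that cause i alone produces the effect."""
    p_none = 1.0
    for active, p in zip(causes, params):
        if active:
            p_none *= (1.0 - p)
    return 1.0 - p_none

params = [0.8, 0.6]                     # illustrative link probabilities
print(noisy_or([True, False], params))  # ~ 0.8
print(noisy_or([True, True], params))   # ~ 0.92: adding a cause never lowers P
```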