Publication


Featured research published by Shan-Hwei Nienhuys-Cheng.


Inductive Logic Programming | 1997

Distance Between Herbrand Interpretations: A Measure for Approximations to a Target Concept

Shan-Hwei Nienhuys-Cheng

We can use a metric to measure the differences between elements in a domain or subsets of that domain (i.e. concepts). Which particular metric should be chosen depends on the kind of difference we want to measure. The well-known Euclidean metric on ℝⁿ and its generalizations are often used for this purpose, but such metrics are not always suitable for concepts whose elements have some structure different from real numbers. For example, in (Inductive) Logic Programming a concept is often expressed as an Herbrand interpretation of some first-order language. Every element in an Herbrand interpretation is a ground atom, which has a tree structure. We start by defining a metric d on the set of expressions (ground atoms and ground terms), motivated by the structure and complexity of the expressions and the symbols used therein. This metric induces the Hausdorff metric h on the set of all sets of ground atoms, which allows us to measure the distance between Herbrand interpretations. We then give some necessary and some sufficient conditions for an upper bound of h between two given Herbrand interpretations, by considering the elements in their symmetric difference.
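The sketch below illustrates the two ingredients the abstract mentions: a structural distance on ground expressions and the Hausdorff distance it induces on finite Herbrand interpretations. The (symbol, args) term encoding and the 1/(2n) weighting per argument position are illustrative assumptions in the spirit of such metrics, not necessarily the paper's exact definition.

```python
# Minimal sketch, assuming ground expressions are encoded as (symbol, args)
# tuples, e.g. p(a) == ('p', (('a', ()),)).

def dist(s, t):
    """Structural distance between two ground expressions."""
    f, s_args = s
    g, t_args = t
    if f != g or len(s_args) != len(t_args):
        return 1.0                       # different top symbol or arity
    if not s_args:
        return 0.0                       # identical constants
    n = len(s_args)
    return sum(dist(a, b) for a, b in zip(s_args, t_args)) / (2 * n)

def hausdorff(A, B):
    """Hausdorff distance between two finite, non-empty sets of ground atoms."""
    d_ab = max(min(dist(a, b) for b in B) for a in A)
    d_ba = max(min(dist(b, a) for a in A) for b in B)
    return max(d_ab, d_ba)

a, b = ('a', ()), ('b', ())
print(dist(('p', (a,)), ('p', (b,))))                                 # 0.5
print(hausdorff({('p', (a,)), ('q', ())}, {('p', (b,)), ('q', ())}))  # 0.5
```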


European Conference on Machine Learning | 1994

Existence and nonexistence of complete refinement operators

Patrick R. J. van der Laag; Shan-Hwei Nienhuys-Cheng

Inductive Logic Programming is a subfield of Machine Learning concerned with the induction of logic programs. In Shapiro's Model Inference System, a system that infers theories from examples, the use of downward refinement operators was introduced to walk through an ordered search space of clauses. Downward and upward refinement operators compute specializations and generalizations of clauses, respectively. In this article we present the results of our study of completeness and properness of refinement operators for an unrestricted search space of clauses ordered by θ-subsumption. We prove that locally finite downward and upward refinement operators that are both complete and proper for unrestricted search spaces ordered by θ-subsumption do not exist. We also present a complete but improper upward refinement operator. This operator forms a counterpart to Laird's downward refinement operator, which has the same properties.
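To make the notion of a downward refinement operator concrete, here is a sketch of the usual elementary specialization steps under θ-subsumption for function-free clauses (unify two variables, bind a variable to a constant, add a most general literal). The clause/literal encoding is an illustrative assumption, and this is a generic operator in the spirit of those studied, not the paper's exact construction.

```python
# Sketch: one-step downward refinements of a function-free clause.
# A clause is a frozenset of (predicate, args) literals; variables are
# upper-case strings, constants lower-case (sign of literals omitted).
from itertools import count

def is_var(t):
    return t[0].isupper()

def clause_vars(clause):
    return {t for (_, args) in clause for t in args if is_var(t)}

def substitute(clause, theta):
    return frozenset((p, tuple(theta.get(t, t) for t in args)) for (p, args) in clause)

def refinements(clause, predicates, constants):
    """Yield one-step specializations of a clause under θ-subsumption."""
    vs = sorted(clause_vars(clause))
    # 1. unify two distinct variables
    for i, x in enumerate(vs):
        for y in vs[i + 1:]:
            yield substitute(clause, {y: x})
    # 2. bind a variable to a constant
    for x in vs:
        for c in constants:
            yield substitute(clause, {x: c})
    # 3. add a most general literal with fresh variables (names assumed unused)
    fresh = (f"V{i}" for i in count())
    for pred, arity in predicates:
        yield clause | frozenset({(pred, tuple(next(fresh) for _ in range(arity)))})

# Example: refine {p(X, Y)} over predicate q/1 and constant a.
c = frozenset({('p', ('X', 'Y'))})
for r in refinements(c, predicates=[('q', 1)], constants=['a']):
    print(sorted(r))
```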


Journal of Logic Programming | 1998

Completeness and properness of refinement operators in inductive logic programming

Patrick R.J. van der Laag; Shan-Hwei Nienhuys-Cheng

Within Inductive Logic Programming, refinement operators compute a set of specializations or generalizations of a clause. They are applied in model inference algorithms to search in a quasi-ordered set for clauses of a logical theory that consistently describes an unknown concept. Ideally, a refinement operator is locally finite, complete, and proper. In this article we show that if an element in a quasi-ordered set ⟨S, ≥⟩ has an infinite or incomplete cover set, then an ideal refinement operator for ⟨S, ≥⟩ does not exist. We translate the nonexistence conditions to a specific kind of infinite ascending and descending chains and show that these chains exist in unrestricted sets of clauses that are ordered by θ-subsumption. Next we discuss how the restriction to a finite ordered subset can enable the construction of ideal refinement operators. Finally, we define an ideal refinement operator for restricted θ-subsumption ordered sets of clauses.
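A small sketch of why finiteness helps, under stated assumptions: for a finite partially ordered set, mapping each element to its (necessarily finite) downward cover set yields an operator that is locally finite, proper, and complete. The order used in the example (divisibility) and all names are illustrative; this is not the operator defined in the paper.

```python
# Sketch: the downward cover set of each element in a finite partial order.
def cover_refinement(elements, ge):
    """ge(x, y) is True iff x is strictly more general than y."""
    def covers(x):
        below = [y for y in elements if ge(x, y)]
        # y is a cover of x if nothing lies strictly between x and y
        return {y for y in below if not any(ge(x, z) and ge(z, y) for z in below)}
    return {x: covers(x) for x in elements}

# Example: divisibility on {1, 2, 3, 6}, where "more general" means "divides".
elems = [1, 2, 3, 6]
rho = cover_refinement(elems, lambda x, y: y % x == 0 and x != y)
print(rho)   # {1: {2, 3}, 2: {6}, 3: {6}, 6: set()}
```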


Journal of Artificial Intelligence Research | 1996

Least generalizations and greatest specializations of sets of clauses

Shan-Hwei Nienhuys-Cheng; Ronald de Wolf

The main operations in Inductive Logic Programming (ILP) are generalization and specialization, which only make sense in a generality order. In ILP, the three most important generality orders are subsumption, implication and implication relative to background knowledge. The two languages used most often are languages of clauses and languages of only Horn clauses. This gives a total of six different ordered languages. In this paper, we give a systematic treatment of the existence or non-existence of least generalizations and greatest specializations of finite sets of clauses in each of these six ordered sets. We survey results already obtained by others and also contribute some answers of our own. Our main new results are, firstly, the existence of a computable least generalization under implication of every finite set of clauses containing at least one nontautologous function-free clause (among other, not necessarily function-free, clauses). Secondly, we show that such a least generalization need not exist under relative implication, not even if both the set that is to be generalized and the background knowledge are function-free. Thirdly, we give a complete discussion of existence and non-existence of greatest specializations in each of the six ordered languages.
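For the subsumption order, the classical building block behind least generalizations is Plotkin-style anti-unification. The sketch below computes the least general generalization of two terms or atoms; the (symbol, args) encoding, including representing fresh variables as argument-less symbols, is an illustrative assumption.

```python
# Sketch: least general generalization (anti-unification) of two terms/atoms.
def lgg(t, s, table=None):
    if table is None:
        table = {}
    f, t_args = t
    g, s_args = s
    if f == g and len(t_args) == len(s_args):
        return (f, tuple(lgg(a, b, table) for a, b in zip(t_args, s_args)))
    # Different symbols: replace the pair by a variable, reusing the same
    # variable for repeated occurrences of the same pair of subterms.
    if (t, s) not in table:
        table[(t, s)] = (f"X{len(table)}", ())
    return table[(t, s)]

# lgg(p(a, f(a)), p(b, f(b))) = p(X0, f(X0))
a, b = ('a', ()), ('b', ())
print(lgg(('p', (a, ('f', (a,)))), ('p', (b, ('f', (b,))))))
```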


European Conference on Machine Learning | 1993

Subsumption and Refinement in Model Inference

Patrick R.J. van der Laag; Shan-Hwei Nienhuys-Cheng

In his famous Model Inference System, Shapiro [10] uses so-called refinement operators to replace too general hypotheses by logically weaker ones. One of these refinement operators works in the search space of reduced first-order sentences. In this article we show that, contrary to Shapiro's claim, this operator is not complete for reduced sentences. We investigate the relations between subsumption and refinement as well as the role of a complexity measure. We present an inverse reduction algorithm which is used in a new refinement operator. This operator is complete for reduced sentences. Finally, we relate our new refinement operator to its dual, a generalization operator, and discuss its possible application in model inference using inverse resolution.
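Since subsumption is the ordering at issue here, a brute-force θ-subsumption test may help fix intuitions: clause C subsumes clause D iff there is a substitution θ with Cθ ⊆ D. The clause encoding, the restriction to a ground, function-free D, and the exhaustive enumeration are illustrative assumptions; real systems use far better algorithms.

```python
# Sketch: does C θ-subsume D?  Literals are (predicate, args); variables are
# upper-case strings; D is assumed ground and function-free.
from itertools import product

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def subsumes(C, D):
    vs = sorted({t for (_, args) in C for t in args if is_var(t)})
    terms = sorted({t for (_, args) in D for t in args})
    for binding in product(terms, repeat=len(vs)):
        theta = dict(zip(vs, binding))
        image = {(p, tuple(theta.get(t, t) for t in args)) for (p, args) in C}
        if image <= D:
            return True
    return False

# {p(X, Y)} subsumes {p(a, b), q(b)}, but not the other way around.
C = {('p', ('X', 'Y'))}
D = {('p', ('a', 'b')), ('q', ('b',))}
print(subsumes(C, D), subsumes(D, C))   # True False
```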


Information Systems | 1990

Classification and syntax of constraints in binary semantical networks

Shan-Hwei Nienhuys-Cheng

We use binary semantical networks to design or to build information systems. A good system should have a good conceptual scheme with well-defined constraints. This article classifies the constraints and defines their syntax so that there is a clear and easy way to express constraints. The constraint language is a generalization of graphical constraints. This article is based on an earlier article by the author (Proceedings of the New Generation Computers Conference) and a Control Data DMRL report by R. Meersman (1982). Examples come mostly from the hospital case of De Brock and Remmen.


Congress of the Italian Association for Artificial Intelligence | 1993

Constructing Refinement Operators by Decomposing Logical Implication

Shan-Hwei Nienhuys-Cheng; Patrick R. J. van der Laag; Leendert W. N. van der Torre

Inductive learning models [15] [18] often use a search space of clauses, ordered by a generalization hierarchy. To find solutions in the model, search algorithms use different generalization and specialization operators. In this article we introduce a framework for decomposing orderings into operators. We will decompose the quasi-ordering induced by logical implication into six increasingly weak orderings. The difference between two successive orderings will be small, and can therefore be understood easily. Using this decomposition, we will describe upward and downward refinement operators for all orderings, including θ-subsumption and logical implication.


Asian Computing Science Conference on Algorithms, Concurrency and Knowledge (ACSC '95) | 1995

The Equivalence of the Subsumption Theorem and the Refutation-Completeness for Unconstrained Resolution

Shan-Hwei Nienhuys-Cheng; Ronald de Wolf

The subsumption theorem is an important theorem concerning resolution. Essentially, it says that a set of clauses Σ logically implies a clause C iff C is a tautology, or a clause D which subsumes C can be derived from Σ with resolution. It was originally proved in 1967 by Lee [Lee67]. In Inductive Logic Programming, interest in this theorem is increasing since its independent rediscovery by Bain and Muggleton [BM92]. It provides a quite natural “bridge” between subsumption and logical implication. Unfortunately, a correct formulation and proof of the subsumption theorem are not available. It is not clear which forms of resolution are allowed. In fact, at least one of the current forms of this theorem is false. This causes a lot of confusion.
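A propositional-scale illustration of the statement may be useful: Σ implies C when some resolution-derivable clause D subsumes C (here C is not a tautology, so that case is omitted). The clause encoding, the tiny example, and the naive saturation loop are illustrative assumptions; for propositional clauses, subsumption reduces to set inclusion.

```python
# Sketch: Σ = {p ∨ q, ¬q ∨ r} derives p ∨ r, which subsumes C = p ∨ r ∨ s.
def resolvents(c1, c2):
    out = set()
    for lit in c1:
        comp = (lit[0], not lit[1])          # complementary literal
        if comp in c2:
            out.add((c1 - {lit}) | (c2 - {comp}))
    return out

def resolution_closure(sigma):
    clauses = set(sigma)
    while True:
        new = {r for c1 in clauses for c2 in clauses for r in resolvents(c1, c2)}
        if new <= clauses:
            return clauses
        clauses |= new

p, q, r, s = [(x, True) for x in "pqrs"]
not_q = ('q', False)
sigma = {frozenset({p, q}), frozenset({not_q, r})}
C = frozenset({p, r, s})
derived = resolution_closure(sigma)
print(any(D <= C for D in derived))   # True: p ∨ r is derived and subsumes C
```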


European Conference on Machine Learning | 1993

Complexity Dimensions and Learnability

Shan-Hwei Nienhuys-Cheng; Mark Polman

In a discussion of the Vapnik-Chervonenkis (VC) dimension [7], which is closely related to the learnability of concept classes in Valiant's PAC model [6], we will give an algorithm to compute it. Furthermore, we will place Natarajan's equivalent dimension for well-ordered classes in a more general scheme, by showing that these well-ordered classes satisfy a general condition which makes it possible to construct a number of equivalent dimensions for a class. We will give this condition, as well as a relatively efficient algorithm for the calculation of one such dimension for well-ordered classes.
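For readers unfamiliar with the VC dimension, the brute-force version of the computation (find the largest set the class shatters) is easy to state; the sketch below only illustrates the definition for a finite class over a finite domain and is not the more efficient algorithm of the paper. All names and the example class are assumptions.

```python
# Sketch: brute-force VC dimension of a finite concept class.
from itertools import combinations

def shatters(concepts, subset):
    """The class shatters S if every subset of S equals S ∩ c for some concept c."""
    realised = {frozenset(subset & c) for c in concepts}
    return len(realised) == 2 ** len(subset)

def vc_dimension(domain, concepts):
    for d in range(len(domain), -1, -1):
        if any(shatters(concepts, set(S)) for S in combinations(domain, d)):
            return d
    return 0

# Example: the "initial segments" {1..k} over {1, 2, 3} shatter no 2-element set.
domain = [1, 2, 3]
concepts = [frozenset(range(1, k + 1)) for k in range(0, 4)]
print(vc_dimension(domain, concepts))   # 1
```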


Mathematical Foundations of Computer Science | 1998

Linear Dynamic Kahn Networks Are Deterministic

Arie de Bruin; Shan-Hwei Nienhuys-Cheng

The (first part of the) Kahn principle states that networks with deterministic nodes are deterministic on the I/O level: for each network, different executions provided with the same input streams deliver the same output stream. The Kahn principle has thus far not been proved for dynamic, non-deterministic networks.
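A minimal, static example of the Kahn principle the abstract refers to: deterministic nodes read from and write to FIFO channels, and the output stream does not depend on the order in which ready nodes fire. The two-node pipeline, the channel representation, and the scheduling policies below are illustrative assumptions; the paper's contribution concerns dynamic networks, which this sketch does not model.

```python
# Sketch: two firing orders of a tiny Kahn-style network give the same output.
from collections import deque

def run(schedule, inputs):
    a, b, out = deque(inputs), deque(), deque()      # FIFO channels

    def double():                                    # node 1: a -> b
        if a:
            b.append(2 * a.popleft())
            return True
        return False

    def add_one():                                   # node 2: b -> out
        if b:
            out.append(b.popleft() + 1)
            return True
        return False

    nodes = {'double': double, 'add_one': add_one}
    progress = True
    while progress:
        # fire the first ready node in the given scheduling order
        progress = any(nodes[name]() for name in schedule)
    return list(out)

print(run(['double', 'add_one'], [1, 2, 3]))   # [3, 5, 7]
print(run(['add_one', 'double'], [1, 2, 3]))   # [3, 5, 7]
```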

Collaboration


Dive into Shan-Hwei Nienhuys-Cheng's collaborations.

Top Co-Authors

Arie de Bruin (Erasmus University Rotterdam)

Mark Polman (Erasmus University Rotterdam)

R. de Wolf (Erasmus University Rotterdam)