Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where George Loizou is active.

Publication


Featured research published by George Loizou.


ACM Transactions on Information and System Security | 2003

Administrative scope: A foundation for role-based administrative models

Jason Crampton; George Loizou

We introduce the concept of administrative scope in a role hierarchy and demonstrate that it can be used as a basis for role-based administration. We then develop a family of models for role hierarchy administration (RHA) employing administrative scope as the central concept. We then extend RHA4, the most complex model in the family, to a complete, decentralized model for role-based administration. We show that SARBAC, the resulting role-based administrative model, has significant practical and theoretical advantages over ARBAC97. We also discuss how administrative scope might be applied to the administration of general hierarchical structures, how our model can be used to reduce inheritance in the role hierarchy, and how it can be configured to support discretionary access control features.
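The scope computation the abstract builds on can be sketched directly: under the standard definition, a role r lies in the administrative scope of a when r is junior to a and every role senior to r is comparable with a. The role names and hierarchy below are a hypothetical example, not taken from the paper.

```python
# Sketch of administrative scope in a role hierarchy (a hypothetical
# example hierarchy; not the paper's own configuration).

# Role hierarchy as edges junior -> senior.
EDGES = {
    "ED": {"E"},
    "E": {"PE1", "QE1", "PE2", "QE2"},
    "PE1": {"PL1"}, "QE1": {"PL1"},
    "PE2": {"PL2"}, "QE2": {"PL2"},
    "PL1": {"DIR"}, "PL2": {"DIR"},
    "DIR": set(),
}
ROLES = set(EDGES)

def up(r):
    """All roles senior to or equal to r (reflexive up-set)."""
    seen, stack = {r}, [r]
    while stack:
        for s in EDGES[stack.pop()]:
            if s not in seen:
                seen.add(s)
                stack.append(s)
    return seen

def down(r):
    """All roles junior to or equal to r (reflexive down-set)."""
    return {x for x in ROLES if r in up(x)}

def scope(a):
    """Administrative scope of a: roles r <= a all of whose seniors
    are comparable with a (lie in up(a) or down(a))."""
    bound = up(a) | down(a)
    return {r for r in down(a) if up(r) <= bound}

# PL1 administers its own project roles, but not E or ED, whose seniors
# include roles (PE2, QE2, PL2) incomparable with PL1.
print(scope("PL1"))
```

Note how the definition localises administration: a change made by PL1 cannot affect roles shared with the other project.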


Archive | 1999

A Guided Tour of Relational Databases and Beyond

Mark Levene; George Loizou

From the Publisher: This book will be of considerable interest to researchers and database practitioners who would like to gain an in-depth understanding of the foundations of modern relational database management systems, which are not presented in more introductory textbooks. It will also serve as a textbook for third-year computer science undergraduates and postgraduates studying database systems.


Information Systems | 2003

Why is the snowflake schema a good data warehouse design?

Mark Levene; George Loizou

Database design for data warehouses is based on the notion of the snowflake schema and its important special case, the star schema. The snowflake schema represents a dimensional model which is composed of a central fact table and a set of constituent dimension tables which can be further broken up into subdimension tables. We formalise the concept of a snowflake schema in terms of an acyclic database schema whose join tree satisfies certain structural properties. We then define a normal form for snowflake schemas which captures its intuitive meaning with respect to a set of functional and inclusion dependencies. We show that snowflake schemas in this normal form are independent as well as separable when the relation schemas are pairwise incomparable. This implies that relations in the data warehouse can be updated independently of each other as long as referential integrity is maintained. In addition, we show that a data warehouse in snowflake normal form can be queried by joining the relation over the fact table with the relations over its dimension and subdimension tables. We also examine an information-theoretic interpretation of the snowflake schema and show that the redundancy of the primary key of the fact table is zero.
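The querying claim at the end of the abstract, that a warehouse in snowflake normal form is queried by joining the fact table with its dimension and subdimension tables, can be illustrated with a tiny SQLite schema. The table and column names are invented for illustration.

```python
# Minimal snowflake-style schema in SQLite: a fact table (sales) joined
# to a dimension table (product), which is further normalised into a
# subdimension (category). Names are illustrative, not from the paper.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE category (cat_id INTEGER PRIMARY KEY, cat_name TEXT);
CREATE TABLE product  (prod_id INTEGER PRIMARY KEY, prod_name TEXT,
                       cat_id INTEGER REFERENCES category(cat_id));
CREATE TABLE sales    (sale_id INTEGER PRIMARY KEY,
                       prod_id INTEGER REFERENCES product(prod_id),
                       amount  REAL);
INSERT INTO category VALUES (1, 'Books'), (2, 'Music');
INSERT INTO product  VALUES (10, 'DB Theory', 1), (11, 'Jazz LP', 2);
INSERT INTO sales    VALUES (100, 10, 25.0), (101, 11, 15.0), (102, 10, 30.0);
""")

# Querying = joining the fact table with its dimension and subdimension.
rows = con.execute("""
    SELECT c.cat_name, SUM(s.amount)
    FROM sales s
    JOIN product p  ON s.prod_id = p.prod_id
    JOIN category c ON p.cat_id  = c.cat_id
    GROUP BY c.cat_name
    ORDER BY c.cat_name
""").fetchall()
print(rows)  # [('Books', 55.0), ('Music', 15.0)]
```

Because the subdimension is referenced only through the dimension table, each relation can be updated independently so long as referential integrity holds, which mirrors the independence result stated above.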


Theoretical Computer Science | 1998

Axiomatisation of functional dependencies in incomplete relations

Mark Levene; George Loizou

Incomplete relations are relations which contain null values, whose meaning is “value is at present unknown”. Such relations give rise to two types of functional dependency (FD). The first type, called the strong FD (SFD), is satisfied in an incomplete relation if for all possible worlds of this relation the FD is satisfied in the standard way. The second type, called the weak FD (WFD), is satisfied in an incomplete relation if there exists a possible world of this relation in which the FD is satisfied in the standard way. We exhibit a sound and complete axiom system for both strong and weak FDs, which takes into account the interaction between SFDs and WFDs. An interesting feature of the combined axiom system is that it is not k-ary for any natural number k ⩾ 0. We show that the combined implication problem for SFDs and WFDs can be solved in time polynomial in the size of the input set of FDs. Finally, we show that Armstrong relations exist for SFDs and WFDs.
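The strong/weak distinction can be made concrete with a brute-force sketch that enumerates possible worlds. Restricting null replacements to a small finite domain is a simplifying assumption for illustration; the paper's possible worlds range over the full attribute domain.

```python
# Brute-force illustration of strong vs weak FD satisfaction in an
# incomplete relation. NULL stands for "value at present unknown"; we
# approximate possible worlds by drawing replacements from a small
# finite domain (a simplifying assumption).
from itertools import product

NULL = None
DOMAIN = [0, 1]  # candidate values for each null (assumption)

def worlds(relation):
    """Yield every completion of the relation over DOMAIN."""
    slots = [(i, j) for i, row in enumerate(relation)
                    for j, v in enumerate(row) if v is NULL]
    for choice in product(DOMAIN, repeat=len(slots)):
        world = [list(row) for row in relation]
        for (i, j), v in zip(slots, choice):
            world[i][j] = v
        yield world

def satisfies(world, lhs, rhs):
    """Standard FD check: equal lhs values force equal rhs values."""
    seen = {}
    for row in world:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False
    return True

def strong(relation, lhs, rhs):
    return all(satisfies(w, lhs, rhs) for w in worlds(relation))

def weak(relation, lhs, rhs):
    return any(satisfies(w, lhs, rhs) for w in worlds(relation))

# Over attributes (A, B): A -> B holds weakly but not strongly here,
# because the null can be completed either way.
r = [(0, 0), (0, NULL)]
print(strong(r, [0], [1]), weak(r, [0], [1]))  # False True
```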


Computer Networks | 2002

A stochastic model for the evolution of the Web

Mark Levene; Trevor I. Fenner; George Loizou; Richard Wheeldon

Recently several authors have proposed stochastic models of the growth of the Web graph that give rise to power-law distributions. These models are based on the notion of preferential attachment leading to the “rich get richer” phenomenon. However, these models fail to explain several distributions arising from empirical results, due to the fact that the predicted exponent is not consistent with the data. To address this problem, we extend the evolutionary model of the Web graph by including a non-preferential component, and we view the stochastic process in terms of an urn transfer model. By making this extension, we can now explain a wider variety of empirically discovered power-law distributions provided the exponent is greater than two. These include: the distribution of incoming links, the distribution of outgoing links, the distribution of pages in a Web site and the distribution of visitors to a Web site. A by-product of our results is a formal proof of the convergence of the standard stochastic model (first proposed by Simon).
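A toy simulation of the mixed preferential/non-preferential growth process described above; the mixing probability, seed, and initial conditions are illustrative, not the paper's calibrated parameters.

```python
# Toy Web-graph growth mixing preferential and uniform (non-preferential)
# attachment, in the spirit of the model sketched in the abstract.
import random

def grow(steps, p_pref=0.7, seed=0):
    """At each step a new page adds one link: with probability p_pref to
    a page chosen proportionally to in-degree + 1, else uniformly."""
    rng = random.Random(seed)
    indeg = [0]    # page 0 exists initially
    targets = [0]  # sampling multiset: one base ticket per page + one per link
    for new_page in range(1, steps + 1):
        if rng.random() < p_pref:
            t = rng.choice(targets)        # preferential: rich get richer
        else:
            t = rng.randrange(len(indeg))  # non-preferential: uniform
        indeg[t] += 1
        targets.append(t)                  # extra ticket for the target
        indeg.append(0)                    # create the new page
        targets.append(new_page)           # its base ticket
    return indeg

indeg = grow(5000)
# Heavy tail: a few pages collect a large share of the links.
print(max(indeg), sum(indeg))
```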


IEEE Transactions on Knowledge and Data Engineering | 1995

A graph-based data model and its ramifications

Mark Levene; George Loizou

Currently, database researchers are investigating new data models in order to remedy the deficiencies of the flat relational model when applied to nonbusiness applications. Herein we concentrate on a recent graph-based data model called the hypernode model. The single underlying data structure of this model is the hypernode, which is a digraph with a unique defining label. We present in detail the three components of the model, namely its data structure, the hypernode, its query and update language, called HNQL, and its provision for enforcing integrity constraints. We first demonstrate that the said data model is a natural candidate for formalising hypertext. We then compare it with other graph-based data models and with set-based data models. We also investigate the expressive power of HNQL. Finally, using the hypernode model as a paradigm for graph-based data modelling, we show how to bridge the gap between graph-based and set-based data models, and at what computational cost this can be done.


BIT Numerical Mathematics | 1975

A class of iteration functions for improving, simultaneously, approximations to the zeros of a polynomial

M. R. Farmer; George Loizou

Approximations to all the zeros of a polynomial may be used to find a better approximation to one of the zeros. Using this fact, we present a simple approach to the derivation of a class of iteration functions for simultaneously improving approximations to zeros of a polynomial. Convergence properties are studied and computational results are included.
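One classical instance of such a simultaneous iteration is the Durand-Kerner (Weierstrass) method, sketched below. It illustrates the idea of improving every approximation using all the others, though it is not necessarily the specific class of iteration functions derived in this paper.

```python
# Durand-Kerner (Weierstrass) method: a classical simultaneous iteration
# for all zeros of a monic polynomial. Shown as an example of the general
# idea; not claimed to be the paper's specific class of iterations.

def prod_others(z, i):
    """Product of (z[i] - z[j]) over all j != i."""
    out = 1.0
    for j, zj in enumerate(z):
        if j != i:
            out *= z[i] - zj
    return out

def durand_kerner(coeffs, iters=100):
    """coeffs: monic polynomial coefficients, highest degree first."""
    n = len(coeffs) - 1
    p = lambda x: sum(c * x ** (n - k) for k, c in enumerate(coeffs))
    # standard starting guesses: powers of a complex number that is
    # neither real nor a root of unity
    z = [complex(0.4, 0.9) ** k for k in range(n)]
    for _ in range(iters):
        z = [zi - p(zi) / prod_others(z, i) for i, zi in enumerate(z)]
    return z

# zeros of x^3 - 6x^2 + 11x - 6 are 1, 2, 3
roots = sorted(durand_kerner([1, -6, 11, -6]), key=lambda w: w.real)
print([round(w.real, 6) for w in roots])
```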


Information Sciences | 1999

A probabilistic approach to navigation in Hypertext

Mark Levene; George Loizou

One of the main unsolved problems confronting Hypertext is the navigation problem, namely the problem of having to know where you are in the database graph representing the structure of a Hypertext database, and knowing how to get to some other place you are searching for in the database graph. Previously we formalised a Hypertext database in terms of a directed graph whose nodes represent pages of information. The notion of a trail, which is a path in the database graph describing some logical association amongst the pages in the trail, is central to our model. We defined a Hypertext Query Language, HQL, over Hypertext databases and showed that in general the navigation problem, i.e. the problem of finding a trail that satisfies an HQL query (technically known as the model checking problem), is NP-complete. Herein we present a preliminary investigation of using a probabilistic approach in order to enhance the efficiency of model checking. The flavour of our investigation is that if we have some additional statistical information about the Hypertext database then we can utilise such information during query processing. We present two different approaches. The first approach utilises the theory of probabilistic automata. In this approach we view a Hypertext database as a probabilistic automaton, which we call a Hypertext probabilistic automaton. In such an automaton we assume that the probability of traversing a link is determined by the usage statistics of that link. We exhibit a special case when the number of trails that satisfy a query is always finite and indicate how to give a finite approximation of answering a query in the general case. The second approach utilises the theory of random Turing machines. In this approach we view a Hypertext database as a probabilistic algorithm, realised via a Hypertext random automaton. In such an automaton we assume that out of a choice of links, traversing any one of them is equally likely. We obtain the lower bound of the probability that a random trail satisfies a query. In principle, by iterating this probabilistic algorithm, associated with the Hypertext database, the probability of finding a trail that satisfies the query can be made arbitrarily large.
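The first approach can be sketched as a tiny Hypertext probabilistic automaton: each link carries a traversal probability (here invented, standing in for usage statistics), and a trail's probability is the product of its link probabilities.

```python
# Sketch of a Hypertext probabilistic automaton. The page names and
# traversal probabilities below are invented illustrative "usage
# statistics", not data from the paper.
GRAPH = {  # page -> {successor: traversal probability}
    "home":     {"papers": 0.6, "teaching": 0.4},
    "papers":   {"db": 0.5, "web": 0.5},
    "teaching": {"db": 1.0},
    "db": {}, "web": {},
}

def trail_probability(trail):
    """Probability that a random surfer follows exactly this trail:
    the product of the probabilities of its links (0 if a link is absent)."""
    prob = 1.0
    for a, b in zip(trail, trail[1:]):
        prob *= GRAPH[a].get(b, 0.0)
    return prob

print(trail_probability(["home", "papers", "db"]))  # 0.6 * 0.5 = 0.3
```

Trails with probability 0 correspond to non-existent links, so ranking candidate trails by this product gives a natural way to prioritise model checking.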


ACM Transactions on Database Systems | 1993

Semantics for null extended nested relations

Mark Levene; George Loizou

The nested relational model extends the flat relational model by relaxing the first normal form assumption in order to allow the modeling of complex objects. Much of the previous work on the nested relational model has concentrated on defining the data structures and query language for the model. The work done on integrity constraints in nested relations has mainly focused on characterizing subclasses of nested relations and defining normal forms for nested relations with certain desirable properties. In this paper we define the semantics of nested relations, which may contain null values, in terms of integrity constraints, called null extended data dependencies, which extend functional dependencies and join dependencies encountered in flat relational database theory. We formalize incomplete information in nested relations by allowing only one unmarked generic null value, whose semantics we do not further specify. The motivation for the choice of a generic null is our desire to investigate only fundamental semantics which are common to all unmarked null types. This led us to define a preorder on nested relations, which allows us to measure the relative information content of nested relations. We also define a procedure, called the extended chase procedure, for testing satisfaction of null extended data dependencies and for making inferences by using these null extended data dependencies. The extended chase procedure is shown to generalize the classical chase procedure, which is of major importance in flat relational database theory. As a consequence of our approach we are able to capture the novel notion of losslessness in nested relations, called herein null extended lossless decomposition. Finally, we show that the semantics of nested relations are a natural extension of the semantics of flat relations.
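The classical chase procedure that the extended chase generalises can be sketched for the flat, FD-only case: whenever two tableau rows agree on the left-hand side of an FD, their right-hand-side symbols are equated. The tableau and FD below are illustrative.

```python
# Minimal classical chase for functional dependencies on a tableau of
# symbolic tuples. The paper's extended chase generalises this to nested
# relations with nulls; this sketch covers only the flat FD case.

def chase(tableau, fds):
    """tableau: list of symbol lists; fds: list of (lhs_cols, rhs_cols)."""
    rows = [list(r) for r in tableau]
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            for r1 in rows:
                for r2 in rows:
                    if all(r1[c] == r2[c] for c in lhs):
                        for c in rhs:
                            if r1[c] != r2[c]:
                                # equate: rename r2's symbol to r1's everywhere
                                old, new = r2[c], r1[c]
                                for r in rows:
                                    for c2, v in enumerate(r):
                                        if v == old:
                                            r[c2] = new
                                changed = True
    return rows

# Columns (A, B, C) with FD A -> B: the two b-symbols get identified.
result = chase([["a", "b1", "c1"], ["a", "b2", "c2"]], [([0], [1])])
print(result)
```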


ACM Transactions on Database Systems | 1999

Database design for incomplete relations

Mark Levene; George Loizou

Although there has been a vast amount of research in the area of relational database design, to our knowledge, there has been very little work that considers whether this theory is still valid when relations in the database may be incomplete. When relations are incomplete and thus contain null values, the problem of whether satisfaction is additive arises. Additivity is the property of the equivalence of the satisfaction of a set of functional dependencies (FDs) F with the individual satisfaction of each member of F in an incomplete relation. It is well known that, in general, satisfaction of FDs is not additive. Previously we have shown that satisfaction is additive if and only if the set of FDs is monodependent. We conclude that monodependence is a fundamental desirable property of a set of FDs when considering incomplete information in relational database design. We show that, when the set of FDs F satisfies either the intersection property or the split-freeness property, the problem of finding an optimum cover of F can be solved in polynomial time in the size of F; in general, this problem is known to be NP-complete. We also show that when F satisfies the split-freeness property, deciding whether there is a superkey of cardinality k or less can be solved in polynomial time in the size of F, since all the keys have the same cardinality. If F only satisfies the intersection property then this problem is NP-complete, as in the general case. Moreover, we show that when F satisfies either the intersection property or the split-freeness property, deciding whether an attribute is prime can be solved in polynomial time in the size of F; in general, this problem is known to be NP-complete. Assume that a relation schema R is in an appropriate normal form with respect to a set of FDs F. We show that when F satisfies the intersection property then the notions of second normal form and third normal form are equivalent. We also show that when R is in Boyce-Codd Normal Form (BCNF), F is monodependent if and only if either there is a unique key for R, or for all keys X for R, the cardinality of X is one less than the number of attributes associated with R. Finally, we tackle a long-standing problem in relational database theory by showing that when a set of FDs F over R satisfies the intersection property, it also satisfies the split-freeness property (i.e., is monodependent) if and only if every lossless join decomposition of R with respect to F is also dependency preserving. As a corollary of this result, we are able to show that when F satisfies the intersection property, it also satisfies the split-freeness property (i.e., is monodependent) if and only if every lossless join decomposition of R, which is in BCNF, is also dependency preserving. Our final result is that when F is monodependent, there exists a unique optimum lossless join decomposition of R which is in BCNF and is also dependency preserving. Furthermore, this ultimate decomposition can be attained in polynomial time in the size of F.
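Several of the results above concern keys and superkeys. The standard attribute-closure algorithm shows why deciding superkey-hood is polynomial in the size of the FD set; this is textbook relational machinery, not the paper's monodependence results, and the schema below is hypothetical.

```python
# Standard attribute-closure algorithm: repeatedly fire FDs whose
# left-hand side is already in the closure. Runs in polynomial time in
# the size of the FD set; the schema R(A, B, C, D) is a made-up example.

def closure(attrs, fds):
    """Closure of attrs under fds, a list of (lhs, rhs) frozensets."""
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= closed and not rhs <= closed:
                closed |= rhs
                changed = True
    return closed

def is_superkey(attrs, schema, fds):
    """attrs is a superkey iff its closure covers the whole schema."""
    return closure(attrs, fds) == set(schema)

FDS = [(frozenset("A"), frozenset("B")),
       (frozenset("B"), frozenset("C")),
       (frozenset("C"), frozenset("D"))]
print(is_superkey({"A"}, "ABCD", FDS), is_superkey({"B"}, "ABCD", FDS))
```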

Collaboration


Dive into George Loizou's collaborations.

Top Co-Authors


Steve Counsell

Brunel University London
