Nabil Kamel
University of Florida
Publications
Featured research published by Nabil Kamel.
International Conference on Management of Data | 1989
Xian-He Sun; Nabil Kamel; Lionel M. Ni
Computing queries from derived relations, optimizing queries from a group of queries, and updating materialized views are important database problems and have attracted much attention. Common to these problems is the demand to quickly solve the implication problem: given two predicates σ_Q and σ_τ, can σ_Q imply σ_τ (σ_Q → σ_τ)? The implication problem has been solved by converting it into a satisfiability problem. Based on a graph representation, a detailed study of the general implication problem on its own is presented in this paper. We prove that the general implication problem, in which all six comparison operators (=, ≠, <, >, ≤, ≥) as well as conjunctions and disjunctions are allowed, is NP-hard. In the case when "≠" operators are not allowed in σ_Q and disjunctions are not allowed in σ_τ, a polynomial-time algorithm is proposed to solve this restricted implication problem. The influence of the "≠" operator and of disjunctions is studied. Our theoretical results show that for some special cases the polynomial-complexity algorithm can solve the implication problem even when the "≠" operator or disjunctions appear in the predicates. Necessary conditions for detecting when the "≠" operator and disjunctions are allowed are also given. These results are very useful in creating heuristic methods.
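The tractable case the abstract identifies (no "≠" in σ_Q, no disjunctions in σ_τ) reduces to a simple containment check when both predicates are conjunctions of range comparisons. Below is a minimal sketch under an illustrative encoding of each conjunctive predicate as a mapping from attribute to a closed interval of allowed values; this encoding is an assumption for illustration, not the paper's graph representation.

```python
# Restricted implication test: sigma_Q implies sigma_T iff, for every
# attribute sigma_T constrains, the interval sigma_Q allows for that
# attribute is contained in sigma_T's interval.

INF = float("inf")

def interval(pred, attr):
    """Closed interval [lo, hi] a conjunctive predicate allows for attr."""
    return pred.get(attr, (-INF, INF))

def implies(sigma_q, sigma_t):
    """True iff every tuple satisfying sigma_q also satisfies sigma_t."""
    for attr, (t_lo, t_hi) in sigma_t.items():
        q_lo, q_hi = interval(sigma_q, attr)
        if q_lo < t_lo or q_hi > t_hi:
            return False
    return True

# sigma_Q: 10 <= age <= 20 AND salary <= 50
sigma_q = {"age": (10, 20), "salary": (-INF, 50)}
# sigma_T: age <= 30
sigma_t = {"age": (-INF, 30)}
print(implies(sigma_q, sigma_t))   # -> True
print(implies(sigma_t, sigma_q))   # -> False
```

Under this encoding the implication test is per-attribute interval containment, hence polynomial; the NP-hardness result applies once "≠" or disjunctions re-enter the predicates.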
IEEE Transactions on Software Engineering | 1989
Xian-He Sun; Nabil Kamel; Lionel M. Ni
The ability to quickly determine how to derive a given query from a set of prestored fragments is highly demanded in many database applications, especially in distributed database systems, where the communication cost is a major concern. The main difficulty in solving this problem lies in the implication problem: given two predicates σ_Q and σ_τ, can σ_Q imply σ_τ (σ_Q → σ_τ)? The implication problem has been solved by converting it into a satisfiability problem. No detailed study of the implication problem on its own has been presented. In this paper, we study the general implication problem in which all six comparison operators (=, ≠, <, >, ≤, ≥) as well as conjunctions and disjunctions are allowed. We prove that the general implication problem is NP-hard. In the case when "≠" operators are not allowed in σ_Q and disjunctions are not allowed in σ_τ, a polynomial-time algorithm is proposed to solve this restricted implication problem. The influence of the "≠" operator and of disjunctions is studied. Our theoretical results show that for some special cases the polynomial-complexity algorithm can solve the implication problem even when the "≠" operator or disjunctions appear in the predicates. Necessary conditions for detecting when the "≠" operator and disjunctions are allowed are also given. These results are very useful in creating heuristic methods.
ACM Transactions on Database Systems | 1992
Nabil Kamel; Roger King
In this paper a new method to improve the utilization of main memory systems is presented. The new method is based on prestoring in main memory a number of query answers, each evaluated out of a single memory page. To this end, the ideas of page-answers and page-traces are formally described and their properties analyzed. The query model used here allows for selection, projection, join, recursive queries as well as arbitrary combinations. We also show how to apply the approach under update traffic. This concept is especially useful in managing the main memories of an important class of applications. This class includes the evaluation of triggers and alerters, performance improvement of rule-based systems, integrity constraint checking, and materialized views. These applications are characterized by the existence at compile time of a predetermined set of queries, by a slow but persistent update traffic, and by their need to repetitively reevaluate the query set. The new approach represents a new type of intelligent database caching, which contrasts with traditional caching primarily in that the cache elements are derived data and as a consequence, they overlap arbitrarily and do not have a fixed length. The contents of the main memory cache are selected based on the data distribution within the database, the set of fixed queries to preprocess, and the paging characteristics. Page-answers and page-traces are used as the smallest indivisible units in the cache. An efficient heuristic to select a near optimal set of page-answers and page-traces to populate the main memory has been developed, implemented, and tested. Finally, quantitative measurements of performance benefits are reported.
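The abstract mentions an efficient heuristic for selecting a near-optimal set of page-answers and page-traces to populate main memory. One plausible shape for such a heuristic, sketched here under assumed benefit and size figures (the paper's actual cost model also accounts for data distribution and paging characteristics), is a greedy benefit-density selection under a memory budget:

```python
# Greedy cache population: pick page-answers in decreasing order of
# benefit per byte until the main-memory budget is exhausted.
# Candidate names, benefits, and sizes are illustrative assumptions.

def select_page_answers(candidates, budget):
    """candidates: list of (name, benefit, size_in_bytes); budget: bytes."""
    chosen, used = [], 0
    for name, benefit, size in sorted(
            candidates, key=lambda c: c[1] / c[2], reverse=True):
        if used + size <= budget:
            chosen.append(name)
            used += size
    return chosen

candidates = [
    ("pa1", 90, 4096),   # answer evaluated from one frequently hit page
    ("pa2", 50, 4096),
    ("pa3", 70, 8192),
]
print(select_page_answers(candidates, 8192))  # -> ['pa1', 'pa2']
```

Because cache elements here are derived data of varying size, a density-based greedy pass is a natural fit; an exact selection would be a knapsack-style optimization.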
Computer Communications | 1992
Magdi N. Kamel; Nabil Kamel
The use of database management systems (DBMS) to replace conventional file processing systems has increased dramatically in recent years. Although DBMSs overcome many of the limitations of file processing systems, many important applications require access to, and integration of, information among several often incompatible DBMSs. In this paper we discuss an approach, known as the federated database approach, that allows users and applications to access and manipulate data across several heterogeneous databases while maintaining their autonomy. We discuss the requirements and objectives of a federated database management system, and outline the major issues and challenges in building and using such a system. In particular, we address the design issues from three angles: transaction management, system architecture, and schema integration. We also present a five-step integration methodology, followed by a comprehensive example illustrating the concepts and techniques involved.
Very Large Data Bases | 1994
Nabil Kamel; Ping Wu; Stanley Y. W. Su
Several object-oriented database management systems have been implemented without an accompanying theoretical foundation for constraint and query specification and processing. The pattern-based object calculus presented in this article provides such a theoretical foundation for describing and processing object-oriented databases. We view an object-oriented database as a network of interrelated classes (i.e., the intension) and a collection of time-varying object association patterns (i.e., the extension). The object calculus is based on first-order logic. It provides the formalism for interpreting precisely and uniformly the semantics of queries and integrity constraints in object-oriented databases. The power of the object calculus is shown in four aspects. First, associations among objects are expressed explicitly in an object-oriented database. Second, the “nonassociation” operator is included in the object calculus. Third, set-oriented operations can be performed on both homogeneous and heterogeneous object association patterns. Fourth, our approach does not assume a specific form of database schema. The proposed formalism is also applied to the design of high-level object-oriented query and constraint languages.
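The contrast between association and "nonassociation" patterns can be illustrated on a toy extension. The classes, objects, and links below are hypothetical stand-ins; the actual calculus is first-order and far more general than this set-comprehension sketch.

```python
# Toy extension: which Student objects are associated with which Course
# objects. An association pattern selects linked pairs; the nonassociation
# operator selects pairs of objects that are explicitly NOT linked.

links = {("s1", "c1"), ("s1", "c2"), ("s2", "c1")}
students = {"s1", "s2", "s3"}
courses = {"c1", "c2"}

def associate(students, courses, links):
    """Pairs (s, c) connected by an association pattern."""
    return {(s, c) for s in students for c in courses if (s, c) in links}

def non_associate(students, courses, links):
    """Pairs (s, c) related by the nonassociation operator."""
    return {(s, c) for s in students for c in courses if (s, c) not in links}

print(sorted(associate(students, courses, links)))
# -> [('s1', 'c1'), ('s1', 'c2'), ('s2', 'c1')]
print(sorted(non_associate(students, courses, links)))
# -> [('s2', 'c2'), ('s3', 'c1'), ('s3', 'c2')]
```

Note that nonassociation is not the logical negation of a query result over one class; it is itself a pattern over pairs, which is why having it as a first-class operator adds expressive power.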
International Workshop on Variable Structure Systems | 1993
Nabil Kamel
A unified framework for multimedia shared workspaces and their associated floor control, treating the audio and video components uniformly, is presented. To this end, a classification scheme is presented which groups shared workspaces according to multiple orthogonal criteria. For visual displays, these include the group-orientation of the applications, the organization of the display, and the contiguity relationships among display images. The same criteria are applied to the audio component, considering the group-orientation of the audio channel, the organization of the speaker systems, and the contiguity relationships among the audible ranges of the speakers. The classification also covers floor control concepts and design approaches, and is applied to a known groupware system, MERMAID, as a case study.
Distributed and Parallel Databases | 1993
Nabil Kamel; Tao Song; Magdi N. Kamel
The requirements of a traditional integration of databases include hiding the heterogeneity among its member databases, preserving their autonomy, supporting controlled data sharing, and providing a user-friendly interface, with performance comparable to that of a homogeneous distributed database system. In this paper, we describe a variation of the loosely coupled federated database approach that satisfies these requirements and is specifically oriented toward the special characteristics and needs of molecular biology databases. The approach differs from the traditional federated database approach in that it integrates software tools together with the databases. In addition, the system gives the graphical user interface a more central role in the integration; it therefore has all the advantages of a visual user environment and is more user-friendly than traditional linguistic approaches. The paper also describes XBio, a system which constructs such an integration. The design of XBio ensures robustness and can be customized to the needs of different classes of users.
Distributed and Parallel Databases | 1994
Nabil Kamel
Prestoring redundant data in secondary-memory auxiliary databases is an idea that can often yield improved retrieval performance through better clustering of related data. The clusters can be based either on whole query results or, as this paper indicates, on more specialized units called page-queries. The deliberate redundancy introduced by the designer is typically accompanied by much unnecessary redundancy among the elements of the auxiliary database. This paper presents algorithms for efficiently removing unwanted redundancy in auxiliary databases organized into page-query units. The algorithms presented here extend prior work on secondary-memory compaction in two respects. First, since it is generally not possible to remove all unwanted redundancies, the paper shows how the compaction can be done so as to remove the most undesirable redundancy from a system-performance point of view; among the factors considered in determining the worst redundancies are the update behavior and the effects of a particular compaction scheme on memory utilization. Second, unlike traditional approaches to database compaction, which aim merely at reducing storage space, this paper considers the paging characteristics in deciding on an optimal compaction scheme. This is done through the use of page-queries. Simulation results are presented and indicate that page-query compaction yields lower storage requirements and greater time savings than standard non-page-query compaction.
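One simple way to picture redundancy removal among page-query units is a greedy pass that drops any unit whose contents are wholly covered by the units that remain, preferring to drop the units that are costliest to keep up to date. The tuple sets and update costs below are illustrative assumptions; the paper's algorithms additionally weigh paging effects and memory utilization.

```python
# Greedy compaction sketch: visit units in decreasing update-cost order
# and discard a unit if its tuples are fully redundant with respect to
# the union of the units still kept.

def compact(units):
    """units: dict name -> (frozenset of tuple ids, update_cost)."""
    names = sorted(units, key=lambda n: units[n][1], reverse=True)
    kept = set(units)
    for n in names:
        tuples, _ = units[n]
        others = set().union(*(units[m][0] for m in kept if m != n))
        if tuples <= others:          # fully redundant w.r.t. what remains
            kept.discard(n)
    return kept

units = {
    "pq1": (frozenset({1, 2, 3}), 5),
    "pq2": (frozenset({2, 3}), 9),    # redundant and costly to maintain
    "pq3": (frozenset({4}), 1),
}
print(sorted(compact(units)))  # -> ['pq1', 'pq3']
```

Visiting high-update-cost units first encodes the paper's point that, when not all redundancy can go, the removal order should target the redundancy that hurts system performance most.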
Proceedings of the 2nd International Conference | 1993
Stanley Y. W. Su; Nabil Kamel
This paper introduces an object-oriented knowledge base management technology which has a number of desirable features. First, an object-oriented semantic association model, OSAM*, provides general structural constructs to model complex objects and their various types of semantic associations. It also allows the user to define the behavioral properties of objects through user-defined operations and knowledge rules, which results in an active knowledge base management system (KBMS). Second, a pattern-based query language, OQL, allows complex search conditions and constraints to be easily specified. Third, a set of intelligent graphical interface tools greatly eases scientists' tasks in defining and querying complex knowledge bases. Fourth, the system can be extended to meet the changing requirements of applications by extending the modeling capabilities of the data model and by modifying the structure of system components. Lastly, efficiency in processing large knowledge bases is achieved by using a transputer-based multiprocessor system and multi-wavefront parallel processing algorithms. A prototype KBMS with the above features has been developed and runs on IBM and Sun workstations.
Information Sciences | 1993
Nabil Kamel; Roger King
To estimate the number of tuples satisfying a given query, a data-distribution model is proposed. The model is based on a discrete approximation of the data space and belongs to the class of nonparametric models. Using texture-analysis techniques applied to the multidimensional data space, a segmentation of this space is obtained as the discrete approximation. The space is thus divided into a number of homogeneous regions which can later be queried to obtain good estimates of the size of the response set. To obtain this segmentation, a hierarchical segmentation method from pattern recognition is adapted, extending its applicability from three dimensions to D dimensions. In order to flatten the hierarchical segmentation while maintaining an accurate model, the homogeneity of the bit patterns of the segments is assessed; a sampling method is proposed for this purpose. The accuracy of this sampling technique is analyzed, and in particular it is shown that only a small sample is needed to provide reasonably accurate estimates.
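Once the space is divided into homogeneous regions, result-size estimation follows the usual histogram logic: assume uniformity inside each region and sum the overlapping fractions. The one-dimensional regions and counts below are illustrative; the paper derives its regions from a texture-based hierarchical segmentation in D dimensions, not the fixed partition used here.

```python
# Estimate the number of tuples in a range query [q_lo, q_hi] from a
# segmentation of one dimension into homogeneous regions, assuming the
# tuples in each region are uniformly distributed over it.

def estimate(regions, q_lo, q_hi):
    """regions: list of (lo, hi, tuple_count) covering one dimension."""
    total = 0.0
    for lo, hi, count in regions:
        overlap = max(0.0, min(hi, q_hi) - max(lo, q_lo))
        total += count * overlap / (hi - lo)   # uniformity assumption
    return total

regions = [(0, 10, 100), (10, 20, 400), (20, 30, 50)]
print(estimate(regions, 5, 15))  # -> 250.0
```

The estimate is exact only when each region really is homogeneous, which is why the paper invests in assessing segment homogeneity before flattening the hierarchy.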