Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Leszek Lilien is active.

Publication


Featured research published by Leszek Lilien.


International Conference on Data Engineering | 1986

Adaptive techniques for distributed query optimization

Clement T. Yu; Leszek Lilien; K. Guh; H. Templeton; David Brill; Arbee L. P. Chen

We propose new adaptive techniques for distributed query optimization. These techniques are divided into two groups: those that directly improve the efficiency of query execution and those that improve cost estimations for query execution strategies. Some of the proposed techniques utilize semantic information and knowledge acquisition to adapt to the environment. The latter, in contrast to the former, is not a well-established idea. This is a disturbing fact, since knowledge acquisition can give significant improvements in the performance of a query optimization algorithm. Performing such analysis manually is extremely time-consuming and tedious; therefore, some learning capacity should be added to the system. Some knowledge acquisition techniques that result in adaptive (dynamic) adjustment to run-time changes are proposed.
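
A small sketch may help picture the knowledge-acquisition idea (a Python illustration under assumptions of mine, not code from the paper): a hypothetical optimizer keeps per-predicate selectivity estimates and nudges them toward the selectivities actually observed at run time, so later cost estimations for candidate execution strategies improve.

# Hypothetical sketch: adaptive selectivity estimation for a distributed
# query optimizer. Stored estimates drift toward observed run-time values,
# so subsequent cost estimations of execution strategies become more accurate.

class AdaptiveSelectivityEstimator:
    def __init__(self, default=0.1, learning_rate=0.3):
        self.estimates = {}            # predicate -> estimated selectivity
        self.default = default
        self.learning_rate = learning_rate

    def estimate(self, predicate):
        """Selectivity used when costing candidate execution strategies."""
        return self.estimates.get(predicate, self.default)

    def observe(self, predicate, rows_in, rows_out):
        """Run-time feedback: adapt the stored estimate toward what was seen."""
        if rows_in == 0:
            return
        observed = rows_out / rows_in
        old = self.estimate(predicate)
        self.estimates[predicate] = old + self.learning_rate * (observed - old)

# The optimizer initially guesses 0.1; execution shows roughly 0.02, and the
# stored estimate moves toward the observed value.
est = AdaptiveSelectivityEstimator()
est.observe("R.city = 'Chicago'", rows_in=10000, rows_out=200)
print(round(est.estimate("R.city = 'Chicago'"), 3))   # 0.076, drifting toward 0.02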


International Conference on Data Engineering | 1989

Quasi-partitioning: a new paradigm for transaction execution in partitioned distributed database systems

Leszek Lilien

The quasi-partitioning paradigm of operation for partitioned database systems is discussed, in which a broken main link between two partitions can be replaced by a much slower backup link (e.g., a dial-up telephone connection). The paradigm solves the problem of preparation for network partitioning. The quasi-partitioning mode of operation has two primitive operations: creeping retrieval and creeping merge. Creeping retrieval increases data availability by crossing partition boundaries (over backup links) to read foreign data. Similarly, creeping merge improves the degree of partition-consistency by crossing partition boundaries to perform merge actions. A quasi-partitioning protocol consists of an adaptation protocol and a merge protocol. Taxonomies are given for quasi-partitioning adaptation protocols and for quasi-partitioning merge protocols (for restoring partition-consistency after system reconnection). Since merge protocols and adaptation protocols are interdependent, it is indicated how these protocols should be paired.
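
The creeping-retrieval primitive can be sketched as follows (a simplified Python illustration under assumptions of mine, not the paper's protocol): when the main link is broken but the backup link is up, a read of a foreign data object crosses the partition boundary over the slow link instead of failing, trading latency for availability.

# Illustrative sketch of creeping retrieval: a read of data held only in a
# foreign partition is routed over the slow backup link rather than rejected.

FAST_LINK_UP = False          # main link between the two partitions is broken
BACKUP_LINK_UP = True         # slow dial-up backup link is available

local_partition = {"x": 1, "y": 2}    # objects with local copies
foreign_partition = {"z": 3}          # objects stored only in the other partition

def read(obj):
    if obj in local_partition:
        return local_partition[obj]       # ordinary local read
    if FAST_LINK_UP or BACKUP_LINK_UP:
        # creeping retrieval: cross the partition boundary over whichever
        # link is available (the backup link is much slower but usable)
        return foreign_partition[obj]
    raise RuntimeError("object unavailable: partitions are isolated")

print(read("x"))   # 1, served locally
print(read("z"))   # 3, served over the backup link (creeping retrieval)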


IEEE Transactions on Software Engineering | 1984

A Scheme for Batch Verification of Integrity Assertions in a Database System

Leszek Lilien; Bharat K. Bhargava

A database management system can ensure the semantic integrity of a database via an integrity control subsystem. A technique for implementation of such a subsystem is proposed. After a database is updated by transactions, its integrity must be verified by evaluation of a set of semantic integrity assertions. For evaluation of an integrity assertion, a number of database pages need to be transferred from the secondary storage to the fast memory. Since certain pages may be required for evaluation of different integrity assertions, the order of the evaluation of the integrity assertions determines the total number of pages fetched from the secondary storage. Hence, the schedule for the evaluation determines the cost of the database verification process. We show that the search for an optimal schedule is an NP-hard problem. Four approximation algorithms that find suboptimal schedules are proposed. They are based on the utilization of intersections among sets of pages required for the evaluation of different integrity assertions. The theoretical worst-case behaviors of these algorithms are studied. Finally, the algorithms are compared via a simulation study to a naive, random-order verification approach. The methods proposed for minimizing the costs of the batch integrity verification also apply to other problems that can be abstracted to the directed traveling salesman optimization problem. For example, the methods are applicable to multiple-query optimization and to concurrency control via predicate locks.
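
The flavor of these approximation algorithms can be conveyed with a minimal greedy heuristic (a Python sketch of the general page-sharing idea, not one of the paper's four algorithms, and with a simplified unlimited-buffer assumption): evaluate next the assertion whose page set overlaps most with the pages already brought into fast memory.

# Greedy ordering sketch: pick next the assertion needing the fewest new page
# fetches, exploiting intersections among the assertions' page sets.

def greedy_schedule(assertion_pages):
    """assertion_pages: dict mapping assertion name -> set of page ids."""
    remaining = dict(assertion_pages)
    resident = set()                 # pages assumed to stay in fast memory
    order, total_fetches = [], 0
    while remaining:
        name = min(remaining, key=lambda a: len(remaining[a] - resident))
        total_fetches += len(remaining[name] - resident)
        resident |= remaining.pop(name)
        order.append(name)
    return order, total_fetches

assertions = {
    "A1": {1, 2, 3},
    "A2": {3, 4},
    "A3": {4, 5, 6},
}
print(greedy_schedule(assertions))   # (['A2', 'A1', 'A3'], 6) under this toy model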


International Conference on Data Engineering | 1988

A performance analysis of an optimistic and a basic timestamp-ordering concurrency control algorithms for centralized database systems

Cyril U. Orji; Leszek Lilien; Janusz Hyziak

A study is made of a known implementation of an optimistic concurrency-control algorithm for centralized database systems, and improvements to the algorithm are suggested. The authors propose an implementation of an algorithm for basic timestamp-ordering concurrency control in centralized database systems. The two algorithms are compared by simulation experiments. As expected, the optimistic approach is better for transaction mixes dominated by retrievals. For transaction mixes dominated by updates, the optimistic algorithm spends time performing operations that have a good chance of being voided by earlier conflicting operations. The authors expected the timestamp algorithm to be better in these circumstances, but this is not the case. They attribute this to the fact that the transactions used in the experiments are short, so the execution time lost due to an abort is small.
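
For reference, here is a minimal sketch of the basic timestamp-ordering rule that serves as the baseline in such comparisons (a Python illustration of the textbook rule, not the authors' implementation): each object records the largest read and write timestamps it has seen, and an operation arriving with an older timestamp than a conflicting later access forces its transaction to abort and restart.

# Basic timestamp-ordering check, illustrated on a single data object.

class TooLate(Exception):
    pass

class TOObject:
    def __init__(self, value=None):
        self.value = value
        self.read_ts = 0      # largest timestamp that has read this object
        self.write_ts = 0     # largest timestamp that has written this object

    def read(self, ts):
        if ts < self.write_ts:
            raise TooLate("read rejected: a younger transaction already wrote")
        self.read_ts = max(self.read_ts, ts)
        return self.value

    def write(self, ts, value):
        if ts < self.read_ts or ts < self.write_ts:
            raise TooLate("write rejected: a conflicting younger access exists")
        self.write_ts = ts
        self.value = value

x = TOObject(0)
x.write(ts=5, value=10)
print(x.read(ts=7))          # 10
try:
    x.write(ts=6, value=99)  # too late: timestamp 7 has already read x
except TooLate as e:
    print(e)                 # the writing transaction would abort and restart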


Symposium on Reliable Distributed Systems | 1988

Pessimistic protocols for quasi-partitioned distributed database systems

Leszek Lilien; Thai M. Chung

The authors propose two protocols for transaction processing in quasi-partitioned databases. The protocols are pessimistic in that they permit the execution of update transactions in exactly one partition. The first protocol is defined for a fully partition-replicated database in which every partition contains a copy of every data object. The second protocol is defined for a partially partition-replicated database in which some objects have no copies in some partitions. Both protocols improve their major performance measures linearly with the backup link speed but are not visibly affected by either the duration of the partitioning or the database size. This is a desirable property, since the backup link speed is the only controllable parameter.
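
The pessimistic rule shared by both protocols can be sketched as follows (a simplified Python illustration with assumed names, not the full protocols): update transactions are accepted only in the single partition designated for updates, while the remaining partitions serve reads from whatever local copies they hold.

# Sketch of the pessimistic rule: updates run in exactly one partition.

class Partition:
    def __init__(self, name, may_update, replicas):
        self.name = name
        self.may_update = may_update   # True for exactly one partition
        self.replicas = replicas       # local copies: object -> value

    def execute(self, txn):
        if txn["writes"] and not self.may_update:
            raise PermissionError(self.name + ": update transactions not permitted here")
        for obj in txn["reads"]:
            if obj not in self.replicas:   # possible in the partially replicated case
                raise LookupError(self.name + ": no local copy of " + obj)
        self.replicas.update(txn["writes"])
        return "committed"

p1 = Partition("P1", may_update=True,  replicas={"a": 1, "b": 2})
p2 = Partition("P2", may_update=False, replicas={"a": 1})

print(p1.execute({"reads": ["a"], "writes": {"a": 5}}))   # committed
try:
    p2.execute({"reads": ["a"], "writes": {"a": 9}})      # rejected: read-only partition
except PermissionError as e:
    print(e)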


Conference on Scientific Computing | 1987

Design issues and an architecture for a heterogeneous multidatabase system

Sushil V. Pillai; Ramanatham Gudipati; Leszek Lilien

Many sophisticated computer applications could be significantly simplified if they were built on top of a general-purpose distributed database management system. Despite much research on distributed database management systems, only a few homogeneous distributed database system architectures have reached the development stage. The situation with heterogeneous multidatabase systems, which connect a number of possibly incompatible pre-existing database systems, is even less satisfactory. To convey the complexity of designing a heterogeneous multidatabase system, we present some issues that have been topics of research in this area. We propose a heterogeneous multidatabase system architecture which addresses some of the problems inherent in heterogeneity, such as reliability, semantic integrity, and protection. The heterogeneous multidatabase system environment considered here connects pre-existing database systems, with the possibility of adding new database systems at any time during the system lifetime.


IEEE Transactions on Software Engineering | 1985

Database Integrity Block Construct: Concepts and Design Issues

Leszek Lilien; Bharat K. Bhargava

When a crash occurs in a transaction processing system, the database can enter an unacceptable state. To continue the processing, the recovery system has three tasks: 1) verification of the database state for acceptability, 2) restoration of an acceptable database state, and 3) restoration of an acceptable history of transaction processing. Unfortunately, these tasks are not trivial, and the computational complexity of the algorithms for most of them is either NP-complete or NP-hard. In this paper we discuss the concepts and design issues of a construct called the database integrity block (DIB). The implementation of this construct allows for efficient verification of the database state by employing a set of integrity assertions and restoration of transaction history by utilizing any database restoration technique, such as an audit trail or a differential file. This paper presents approximation algorithms for minimizing the costs of evaluation of integrity assertions by modeling the problem as the directed traveling salesman problem, and presents a methodology to compare the costs of the audit trail and differential file techniques for database restoration. The applicability of integrity verification research to the problem of multiple-query optimization is also discussed.
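
A toy cost comparison (assumptions of mine, not the paper's methodology) hints at how the two restoration techniques named above trade off: an audit trail re-applies logged updates one by one, while a differential file merges only the pages recorded as changed.

# Toy cost model contrasting the two restoration techniques (all parameters
# are illustrative assumptions).

def audit_trail_cost(num_logged_updates, io_per_update=1):
    # Replay every update recorded since the last acceptable state.
    return num_logged_updates * io_per_update

def differential_file_cost(num_changed_pages, io_per_page=1):
    # Merge only the pages recorded as changed in the differential file.
    return num_changed_pages * io_per_page

# Many small updates concentrated on few pages favor the differential file;
# updates scattered over many pages can favor the audit trail.
print(audit_trail_cost(num_logged_updates=10000))      # 10000
print(differential_file_cost(num_changed_pages=300))   # 300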


IEEE Journal on Selected Areas in Communications | 1989

Pessimistic quasipartitioning protocols for distributed database systems

Leszek Lilien; Thai M. Chung

A communication link failure can result in a network partitioning that fragments a distributed database system into isolated parts. If a severed high-speed link (e.g., a satellite link) between the partitions can be replaced by a much slower backup link (e.g., a dial-up telephone line), the partitioning becomes a quasipartitioning. Two protocols for transaction processing in quasipartitioned databases are proposed. The protocols are pessimistic in that they permit update transactions to be executed in exactly one partition. The first protocol is defined for a fully partition-replicated database in which every partition contains a copy of every data object. The second protocol is defined for a partially partition-replicated database in which some objects have no copies in some partitions. Both protocols improve their major performance measures linearly with the backup link speed but are not visibly affected by the duration of the partitioning or the database size.


Sadhana: Academy Proceedings in Engineering Sciences | 1987

Enforcement of data consistency in database systems

Bharat K. Bhargava; Leszek Lilien

The absolute correctness of a database is an ideal goal and cannot be guaranteed. Only a lower level of database consistency can be enforced in practice. We discuss the issue of database consistency, beginning with the identification of correctness criteria for database systems. A taxonomy of methods for verification and restoration of database consistency is used to identify classes of methods with practical significance. We discuss how fault tolerance (using both general and application-specific system properties) allows us to maintain database consistency in the presence of faults and errors in a database system, and how database consistency can be restored after site crashes and network partitionings. A database system can ensure the semantic integrity of a database via verification of a set of integrity assertions. We show how to efficiently verify the integrity of a database state. Finally, batch verification of integrity assertions is presented as one of the promising approaches that use parallelism to speed up the verification.
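
The batch-verification idea combined with parallelism can be illustrated with a small sketch (a Python illustration under assumptions of mine, not the paper's method): independent integrity assertions are evaluated concurrently over a database snapshot, and any violated assertions are reported.

# Parallel batch verification of independent integrity assertions.

from concurrent.futures import ThreadPoolExecutor

database = {"salary": [900, 1200, 1500], "age": [25, 41, 38]}

# Each integrity assertion is a predicate over the database state.
assertions = {
    "salaries_positive": lambda db: all(s > 0 for s in db["salary"]),
    "ages_reasonable":   lambda db: all(0 < a < 130 for a in db["age"]),
}

def verify_batch(db, checks):
    """Evaluate all assertions concurrently; return the names of violated ones."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(check, db) for name, check in checks.items()}
    return [name for name, fut in futures.items() if not fut.result()]

print(verify_batch(database, assertions))   # [] -> the state satisfies all assertions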


International Journal of Parallel Programming | 1981

On Optimal Scheduling of Integrity Assertions in a Transaction Processing System

Bharat K. Bhargava; Leszek Lilien

Semantic integrity of a database is guarded by a set of integrity assertions expressed as predicates on database values. The problem of efficient evaluation of integrity assertions in transaction processing systems is considered. Three methods of validation (compile-time, run-time, and post-execution validation) are analyzed in terms of database access costs. The results show that if transactions are executed independently of each other, the cost of compile-time validation is never higher than the cost of run-time validation; in turn, the cost of the latter is never higher than the cost of post-execution validation.
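
A toy illustration of the three validation points and their relative database-access costs (my own Python example, not the paper's cost model) for the assertion "salary > 0" applied to a transaction that sets a salary; the returned access counts mirror the cost ordering stated above.

# Each function returns (assertion satisfied?, number of page accesses).

def compile_time(v):
    # Checked on the transaction text alone: no database pages are touched.
    return (v > 0), 0

def run_time(db, v):
    # Checked while the update executes: the target page is fetched once
    # and the check reuses it.
    page = db["emp1"]
    ok = v > 0
    if ok:
        page["salary"] = v
    return ok, 1

def post_execution(db, v):
    # Update first (one page access), then re-read the affected page to verify.
    db["emp1"]["salary"] = v
    ok = db["emp1"]["salary"] > 0
    return ok, 2

db = {"emp1": {"salary": 1000}}
print(compile_time(1200))        # (True, 0)
print(run_time(db, 1200))        # (True, 1)
print(post_execution(db, 1200))  # (True, 2)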

Collaboration


Dive into Leszek Lilien's collaborations.

Top Co-Authors

Ala I. Al-Fuqaha, Western Michigan University
Clement T. Yu, University of Illinois at Chicago
Cyril U. Orji, University of Illinois at Chicago
David Brill, System Development Corporation
H. Templeton, System Development Corporation
Jinghao Xu, University of Illinois at Chicago
K. Guh, University of Illinois at Chicago
Ramanatham Gudipati, University of Illinois at Chicago