
Publication


Featured research published by Nagui Halim.


ACM Transactions on Database Systems | 1991

Management of a remote backup copy for disaster recovery

Richard P. King; Nagui Halim; Hector Garcia-Molina; Christos A. Polyzois

A remote backup database system tracks the state of a primary system, taking over transaction processing when disaster hits the primary site. The primary and backup sites are physically isolated so that failures at one site are unlikely to propagate to the other. For correctness, the execution schedule at the backup must be equivalent to that at the primary. When the primary and backup sites each contain a single processor, this property is easy to achieve; it is harder when each site contains multiple processors and the sites are connected via multiple communication lines. We present an efficient transaction processing mechanism for multiprocessor systems that guarantees this and other important properties. We also present a database initialization algorithm that copies the database to a backup site while transactions are being processed.
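The schedule-equivalence idea above can be illustrated with a minimal sketch (not the paper's actual mechanism; the class and names here are hypothetical): a backup buffers incoming log records and applies them strictly in sequence order, so the backup's execution schedule matches the primary's even when records arrive out of order.

```python
# Illustrative sketch of in-order log application at a backup site.
import heapq

class BackupApplier:
    def __init__(self):
        self.next_seq = 1   # next log sequence number to apply
        self.pending = []   # min-heap of out-of-order records
        self.state = {}     # backup copy of the database

    def receive(self, seq, key, value):
        """Buffer a log record; apply every record that is now in order."""
        heapq.heappush(self.pending, (seq, key, value))
        while self.pending and self.pending[0][0] == self.next_seq:
            _, k, v = heapq.heappop(self.pending)
            self.state[k] = v
            self.next_seq += 1

applier = BackupApplier()
# Records arrive out of order over multiple communication lines.
for rec in [(2, "y", 20), (1, "x", 10), (3, "x", 30)]:
    applier.receive(*rec)
# All three records are applied in sequence order regardless of arrival order.
```

A real system must also handle gaps that never fill (lost messages) and multi-record transactions; this sketch shows only the ordering discipline.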


Winter Simulation Conference | 2006

SWORD: scalable and flexible workload generator for distributed data processing systems

Kay S. Anderson; Joseph Phillip Bigus; Eric Bouillet; Parijat Dube; Nagui Halim; Zhen Liu; Dimitrios Pendarakis

Workload generation is commonly employed for performance characterization, testing, and benchmarking of computer systems and networks. Workload generation typically aims to simulate or emulate the traffic generated by different types of applications, protocols, and activities, such as Web browsing, email, chat, and streaming multimedia. We present a scalable workload generator (SWORD) that we have developed for testing and benchmarking high-volume data processing systems. The tool is not only scalable but also flexible and extensible, allowing it to generate workloads for a wide variety of application types and content.
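A minimal sketch of the flexible-and-extensible idea, assuming a plugin-style design (the names here are illustrative, not SWORD's actual API): traffic types are registered as producer functions, and the generator draws records from a weighted mix of them.

```python
# Illustrative pluggable workload generator: each traffic type is a
# registered producer; output is drawn from a weighted mix of producers.
import random

class WorkloadGenerator:
    def __init__(self, seed=42):
        self.rng = random.Random(seed)  # seeded for reproducible workloads
        self.sources = {}               # name -> record-producing function

    def register(self, name, producer):
        """Plug in a new traffic type (e.g. web, email, media)."""
        self.sources[name] = producer

    def generate(self, mix, n):
        """Emit n records drawn from a weighted mix of registered sources."""
        names = list(mix)
        weights = [mix[name] for name in names]
        return [self.sources[self.rng.choices(names, weights)[0]](self.rng)
                for _ in range(n)]

gen = WorkloadGenerator()
gen.register("web", lambda r: {"type": "web", "bytes": r.randint(500, 5000)})
gen.register("email", lambda r: {"type": "email", "bytes": r.randint(100, 2000)})
records = gen.generate({"web": 0.7, "email": 0.3}, 1000)
```

New application types are added by registering another producer, without touching the generator core; that is one plausible way to read "flexible and extensible" here.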


International Conference on Distributed Computing Systems | 1990

Overview of disaster recovery for transaction processing systems

Richard P. King; Nagui Halim; Hector Garcia-Molina; Christos A. Polyzois

An overview is given of the major issues involved in maintaining an up-to-date backup copy of a database, kept at a remote site. A method is presented for performing this task without impairing the performance at the primary site. The method is scalable, and it is particularly suitable for multiprocessor systems. The mechanism is relatively straightforward and can be implemented using well-known concepts and techniques, such as locking and logging.


IBM Journal of Research and Development | 1984

A new programming methodology for long-lived software systems

Robert E. Strom; Nagui Halim

A new software development methodology based on the language NIL is presented. The methodology emphasizes (1) the separation of program development into functional specification and tuning phases, (2) the use of a fully compilable and executable design, and (3) an interface definition and verification mechanism. This approach reduces life-cycle costs and improves software quality because (a) errors are detected earlier, and (b) a single functional design can be reused to produce many implementations.


IBM Systems Journal | 2008

Harmony: holistic messaging middleware for event-driven systems

Parijat Dube; Nagui Halim; Kyriakos Karenos; Minkyong Kim; Zhen Liu; Srinivasan Parthasarathy; Dimitrios Pendarakis; Hao Yang

In this paper, we present Harmony, a holistic messaging middleware for distributed, event-driven systems. Harmony supports various communication paradigms and heterogeneous networks. The key novelty of Harmony is the unified provision of end-to-end quality of service, security, and resiliency, which shields the applications from the underlying network dynamics, failures, and security configurations. We describe the Harmony architecture in the context of cyber-physical business applications and elaborate on the design of its critical system components, including routing, security, and mobility support.


IEEE International Symposium on Fault-Tolerant Computing | 1998

Software exploitation of a fault-tolerant computer with a large memory

Frank Eskesen; Michel H. T. Hack; Arun Iyengar; Richard P. King; Nagui Halim

The DM/6000 hardware (a prototype, fault-tolerant RS/6000 built at the T.J. Watson Research Center) provides fault tolerance and a large, nonvolatile main memory. Running a commercial, general-purpose operating system on it does nothing, by itself, to increase software availability; in fact, the time to rebuild the contents of a large memory may decrease availability. We describe our techniques for hiding most of the main memory from the operating system, which can then access it only through services kept separate from the operating system. This allows the memory and those access services to achieve much higher availability, which in turn increases the availability of the system as a whole. We also performed simulation studies to determine the conditions under which this system organization can improve performance for recoverable database applications.
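The hiding technique can be sketched as follows (a minimal illustration, assuming a service-layer design; these names are hypothetical, not the DM/6000 interfaces): the operating system never maps the protected memory directly, and all reads and writes go through a small access service whose contents survive an operating-system rebuild.

```python
# Illustrative access service for memory hidden from the operating system.
class ProtectedMemoryService:
    def __init__(self, size):
        # Stands in for the hidden, nonvolatile region of main memory.
        self._store = bytearray(size)

    def read(self, offset, length):
        """Return a copy of bytes from the protected region."""
        return bytes(self._store[offset:offset + length])

    def write(self, offset, data):
        """Write bytes into the protected region."""
        self._store[offset:offset + len(data)] = data

svc = ProtectedMemoryService(1024)
svc.write(0, b"recoverable state")
# After an OS restart, the service and its contents remain available,
# so a database can recover without a full memory rebuild.
```

The availability argument is that the small service layer is far easier to keep highly available than a full general-purpose operating system.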


IEEE International Symposium on Fault-Tolerant Computing | 1993

A case for fault-tolerant memory for transaction processing

Anupam Bhide; Daniel M. Dias; Nagui Halim; T. Basil Smith; Francis Nicholas Parr

For database transaction processing, the authors compare the relative price-performance of storing data in volatile memory (V-mem), fault-tolerant non-volatile memory (FT-mem), and disk. First, they extend Gray's five-minute rule, which compares the relative cost of storing read-only data in volatile memory versus on disk, to read-write data. Second, they show that because of additional write overhead, FT-mem has a greater advantage over V-mem than previously thought. Previous studies comparing volatile and non-volatile memories have focused on the response-time advantages of putting log data in non-volatile memory; the authors show that an FT-mem buffer also directly reduces disk I/O, which leads to a much larger cost savings. Third, the five-minute rule is a simple model that assumes knowledge of inter-access times for data items. The authors present a more realistic model that assumes an LRU buffer management policy, combine it with the recovery-time constraint, and study the resulting price-performance. The use of an FT-mem buffer is shown to yield a significant benefit in overall price-performance.
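Gray's five-minute rule, extended above, rests on a short break-even calculation: a page is worth keeping in memory if it is re-accessed more often than the interval at which the amortized cost of a disk arm equals the cost of the memory holding the page. A worked example with illustrative 1987-era prices (not figures from this paper):

```python
# Break-even interval behind the five-minute rule:
# (pages per MB of RAM / accesses per second per disk)
#   * (price per disk / price per MB of RAM)
pages_per_mb = 1024            # 1 KB pages per MB of RAM
accesses_per_sec_per_disk = 15 # random accesses a disk arm can sustain
price_per_disk = 15_000        # dollars (illustrative)
price_per_mb_ram = 5_000       # dollars (illustrative)

break_even_sec = (pages_per_mb / accesses_per_sec_per_disk) * \
                 (price_per_disk / price_per_mb_ram)
print(round(break_even_sec / 60, 1), "minutes")  # → 3.4 minutes
```

With these illustrative numbers the interval comes out near three and a half minutes; Gray's original figures gave roughly five, hence the rule's name. The paper's extension accounts for the extra disk writes that read-write data incurs, which shifts the break-even point further in favor of (fault-tolerant) memory.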


Distributed Systems: Operations and Management | 2000

Scalable Visualization of Event Data

David J. Taylor; Nagui Halim; Joseph L. Hellerstein; Sheng Ma

Monitoring large distributed systems often results in massive quantities of data that must be analyzed in order to yield useful information about the system. This paper describes a task-oriented approach to exploratory analysis that scales to very large event sets and an architecture that supports this process. The process and architecture are motivated through an example of exploratory analysis for problem determination using data from a corporate intranet.


Archive | 2000

Method, computer program product, and system for deriving web transaction performance metrics

Willy W. Chiu; Nagui Halim; Joseph L. Hellerstein; Leroy A. Krueger; W. Nathaniel Mills; Mark S. Squillante


Archive | 2008

Data replica selector

Jinliang Fan; Nagui Halim; Zhen Liu; Dimitrios Pendarakis

Collaboration


Top co-author: Zhen Liu, French Institute for Research in Computer Science and Automation.