Publication


Featured research published by Robert Geist.


Computer Graphics Forum | 2005

Re-coloring images for gamuts of lower dimension

Robert Geist; Karl Rasche

Two new techniques for the conversion of color images to gray scale images are discussed. The necessary components for producing visually pleasing gray scale images are identified, and the inadequacies of previous methods are discussed. Several examples of the new techniques are included. The techniques are extended to the problem of recoloring images to preserve visual information for color deficient viewers. Results of a perceptual experiment are discussed, showing the advantages of the new techniques over existing techniques.


Journal of Guidance, Control, and Dynamics | 1986

The Hybrid Automated Reliability Predictor

Joanne Bechta Dugan; Kishor S. Trivedi; Mark Smotherman; Robert Geist

In this paper, we present an overview of the hybrid automated reliability predictor (HARP), under development at Duke and Clemson Universities. The HARP approach to reliability prediction is characterized by a decomposition of the overall model into distinct fault-occurrence/repair and fault/error-handling submodels. The fault-occurrence/repair model can be cast as either a fault tree or a Markov chain and is solved analytically. Both exponential and Weibull time-to-failure distributions are allowed. A variety of choices is available for specifying the fault/error-handling behavior, which may be solved analytically or simulated. Both graphical and textual interfaces are provided to HARP.
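
A minimal sketch of the decomposition idea described above, assuming a toy duplex system: the fault/error-handling behavior is collapsed into a single coverage probability c, and the fault-occurrence model becomes a three-state Markov chain that can be solved in closed form. The duplex structure, rates, and parameter values are illustrative assumptions, not HARP's actual models.

```python
import math

def duplex_reliability(t, lam=1e-4, c=0.95):
    """R(t) for two active units with per-unit failure rate lam and coverage c.

    States: both units up -> one unit up (covered fault) -> system failure.
    An uncovered fault (probability 1 - c) fails the system immediately.
    The chain is small enough to integrate by hand.
    """
    p2 = math.exp(-2 * lam * t)              # both units still up
    p1 = 2 * c * (math.exp(-lam * t) - p2)   # exactly one covered failure so far
    return p2 + p1                           # survival = either operational state

print(duplex_reliability(1000.0))            # e.g. a 1000-hour mission
```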


ACM Transactions on Computer Systems | 1987

A continuum of disk scheduling algorithms

Robert Geist; Stephen W. Daniel

A continuum of disk scheduling algorithms, V(R), having endpoints V(0) = SSTF and V(1) = SCAN, is defined. V(R) maintains a current SCAN direction (in or out) and services next the request with the smallest effective distance. The effective distance of a request that lies in the current direction is its physical distance (in cylinders) from the read/write head. The effective distance of a request in the opposite direction is its physical distance plus R × (total number of cylinders on the disk). By use of simulation methods, it is shown that this definitional continuum also provides a continuum in performance, both with respect to the mean and with respect to the standard deviation of request waiting time. For objective functions that are linear combinations of the two measures, μ_w + kσ_w, intermediate points of the continuum are seen to provide performance uniformly superior to both SSTF and SCAN. A method of implementing V(R) and the results of its experimental use in a real system are presented.
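
The selection rule above is concrete enough to sketch directly. The following is a minimal illustration of V(R); the queue representation, tie handling, and the 200-cylinder disk size are assumptions made for the example, not details from the paper.

```python
NUM_CYLINDERS = 200  # assumed disk size for illustration

def next_request(head, direction, pending, r, num_cylinders=NUM_CYLINDERS):
    """Pick the pending cylinder with the smallest effective distance.

    head      -- current head position (cylinder number)
    direction -- +1 or -1, the current SCAN direction
    pending   -- list of requested cylinder numbers
    r         -- continuum parameter: r = 0 gives SSTF, r = 1 gives SCAN
    Returns (chosen_cylinder, new_direction).
    """
    def effective(cyl):
        dist = abs(cyl - head)
        # A request in the current direction costs its physical distance;
        # a request behind the head is penalized by r * (total cylinders).
        if (cyl - head) * direction >= 0:
            return dist
        return dist + r * num_cylinders

    chosen = min(pending, key=effective)
    # If the winner lies behind the head, the scan direction reverses.
    new_direction = direction if (chosen - head) * direction >= 0 else -direction
    return chosen, new_direction

queue = [95, 130, 180, 20]
print(next_request(100, +1, queue, r=0.0))  # SSTF behavior: picks 95 (distance 5)
print(next_request(100, +1, queue, r=0.2))  # picks 130: 30 < 5 + 0.2 * 200 = 45
```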


IEEE Computer Graphics and Applications | 2005

Detail preserving reproduction of color images for monochromats and dichromats

Karl Rasche; Robert Geist; James Westall

In spite of the ever-increasing prevalence of low-cost, color printing devices, gray-scale printers remain in widespread use. Authors producing documents with color images for any venue must account for the possibility that the color images might be reduced to gray scale before they are viewed. Because conversion to gray scale reduces the number of color dimensions, some loss of visual information is generally unavoidable. Ideally, we can restrict this loss to features that vary minimally within the color image. Nevertheless, with standard procedures in widespread use, this objective is not often achieved, and important image detail is often lost. Consequently, algorithms that convert color images to gray scale in a way that preserves information remain important. Human observers with color-deficient vision may experience the same problem, in that they may perceive distinct colors to be indistinguishable and thus lose image detail. The same strategy that is used in converting color images to gray scale provides a method for recoloring the images to deliver increased information content to such observers.
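
To make the loss concrete: under a plain luminance-weighted conversion (the Rec. 601 weights below are one example of the "standard procedures" mentioned, not the authors' method), two clearly different colors can collapse to essentially the same gray level, erasing the boundary between them.

```python
def luminance(rgb):
    """Standard luminance-weighted gray value (Rec. 601 coefficients)."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

red = (255, 0, 0)      # saturated red
green = (0, 130, 0)    # a mid green with nearly the same luminance
print(luminance(red), luminance(green))  # ~76.2 vs ~76.3: an edge between these
                                         # two colors vanishes in gray scale
```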


IEEE Computer | 1990

Reliability estimation of fault-tolerant systems: tools and techniques

Robert Geist; Kishor S. Trivedi

A comparative evaluation of state-of-the-art tools and techniques for estimating the reliability of fault-tolerant computing systems is presented. The theory of reliability estimation is briefly reviewed. Five current approaches are compared in detail: HARP (hybrid automated reliability predictor), SURE (semi-Markov unreliability range estimator), HEIRESS (hierarchical estimation of interval reliability by skewed sampling), SHARPE (symbolic hierarchical automated reliability and performance evaluator), and SAVE (system availability estimator). Particular attention is given to design limitations imposed by underlying model assumptions, on the one hand, and the efficiency and accuracy of the solution techniques employed, on the other hand.


IEEE Transactions on Reliability | 1983

Decomposition in Reliability Analysis of Fault-Tolerant Systems

Kishor S. Trivedi; Robert Geist

Two important problems which arise in modeling fault-tolerant systems with ultra-high reliability requirements are discussed. 1) Any analytic model of such a system has a large number of states, making the solution computationally intractable. This leads to the need for decomposition techniques. 2) The common assumption of exponential holding times in the states is untenable when modeling such systems. Approaches to solving this problem are reviewed. A major notion described in the attempt to deal with reliability models with a large number of states is that of behavioral decomposition followed by aggregation. Models of the fault-handling processes are either semi-Markov or simulative in nature, thus removing the usual restriction of exponential holding times within the coverage model. The aggregate fault-occurrence model is a nonhomogeneous Markov chain, thus allowing the times to failure to possess Weibull-like distributions. There are several potential sources of error in this approach to reliability modeling. The decomposition/aggregation process involves errors in estimating the transition parameters. The numerical integration involves discretization and round-off errors. Analysis of these errors, and questions of the sensitivity of the output (R(t)) to the inputs (failure rates and recovery model parameters) and to the initial system state, become extremely important when dealing with ultra-high reliability requirements.
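
As a hedged illustration of the aggregation step, the sketch below integrates a tiny nonhomogeneous fault-occurrence chain numerically: the failure rate follows a Weibull hazard, and the fault-handling submodel is summarized by a coverage probability c. The two-unit structure, parameter values, and forward-Euler step are assumptions for the example; the coarse step size also shows where the discretization error mentioned above comes from.

```python
def weibull_hazard(t, shape=1.5, scale=1e4):
    """Time-dependent failure rate giving Weibull-like times to failure."""
    if t <= 0.0:
        return 0.0
    return (shape / scale) * (t / scale) ** (shape - 1)

def reliability(t_end, c=0.95, dt=1.0):
    """Forward-Euler integration of a duplex chain: states (2 up, 1 up)."""
    p2, p1, t = 1.0, 0.0, 0.0
    while t < t_end:
        lam = weibull_hazard(t)
        dp2 = -2.0 * lam * p2                  # either unit may fail
        dp1 = 2.0 * c * lam * p2 - lam * p1    # covered faults feed the degraded state
        p2 += dt * dp2
        p1 += dt * dp1                         # discretization error shrinks with dt
        t += dt
    return p2 + p1                             # R(t_end)

print(reliability(1000.0))
```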


IEEE Transactions on Computers | 1992

Estimation and enhancement of real-time software reliability through mutation analysis

Robert Geist; A. J. Offutt; Frederick C. Harris

A simulation-based method for obtaining numerical estimates of the reliability of N-version, real-time software is proposed. An extended stochastic Petri net is used to represent the synchronization structure of N versions of the software, where dependencies among versions are modeled through correlated sampling of module execution times. The distributions of execution times are derived from automatically generated test cases that are based on mutation testing. Since these test cases are designed to reveal software faults, the associated execution times and reliability estimates are likely to be conservative. Experimental results using specifications for NASA's planetary lander control software suggest that mutation-based testing could hold greater potential for enhancing reliability than the desirable but perhaps unachievable goal of independence among N versions. Nevertheless, some support for N-version enhancement of high-quality, mutation-tested code is also offered. Mutation analysis could also be valuable in the design of fault-tolerant software systems.
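
"Correlated sampling of module execution times" can be illustrated with a shared latent factor: each version's time mixes a common component with an independent one, so versions tend to run slow or fast together. The lognormal form, the mixing rule, and all parameters below are assumed for the sketch; the paper instead derives its distributions from mutation-generated test cases and embeds them in an extended stochastic Petri net.

```python
import math
import random

def correlated_times(n_versions=3, rho=0.6, mean_log=0.0, sd_log=0.3):
    """Draw one execution time per version; rho controls their correlation."""
    shared = random.gauss(0.0, 1.0)                     # common latent factor
    times = []
    for _ in range(n_versions):
        own = random.gauss(0.0, 1.0)                    # version-specific factor
        z = math.sqrt(rho) * shared + math.sqrt(1.0 - rho) * own
        times.append(math.exp(mean_log + sd_log * z))   # lognormal execution time
    return times

print(correlated_times())
```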


Computers & Electrical Engineering | 1984

Hybrid reliability modeling of fault-tolerant computer systems

Kishor S. Trivedi; Joanne Bechta Dugan; Robert Geist; Mark Smotherman

Current technology allows sufficient redundancy in fault-tolerant computer systems to ensure that the failure probability due to exhaustion of spares is low. Consequently, the major cause of failure is the inability to correctly detect, isolate, and reconfigure when faults are present. Reliability estimation tools must be flexible enough to accurately model this critical fault-handling behavior and yet remain computationally tractable. This paper discusses reliability modeling techniques based on a behavioral decomposition that provides tractability by separating the reliability model along temporal lines into nearly disjoint fault-occurrence and fault-handling submodels. An Extended Stochastic Petri Net (ESPN) model provides the needed flexibility for representing the fault-handling behavior, while a nonhomogeneous Markov chain accounts for the possibly non-Poisson fault-occurrence behavior. Since the submodels are separate, the ESPN submodel, in which all time constants are of the same order of magnitude, can be simulated. The nonhomogeneous Markov chain is solved analytically, and the result is a hybrid model. The method of coverage factors, used to combine the submodels, is generalized to more accurately reflect the fault-handling effectiveness within the fault-occurrence model. However, due to approximations made in the aggregation of the two submodels and inaccurate estimation of component failure rates and other model parameters, errors can still arise in the subsequent reliability predictions. The accuracy of the model predictions is evaluated analytically, and error bounds on the system reliability are produced. These modeling techniques have been implemented in the HARP (Hybrid Automated Reliability Predictor) program.
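
The sketch below shows, under heavy simplification, how a coverage factor might be estimated by simulating the fault-handling submodel and then handed to the analytic fault-occurrence model. A real ESPN tracks detection, isolation, and reconfiguration as a Petri net; here those phases are reduced to two assumed exponential delays and a single deadline, purely to illustrate the combination step.

```python
import random

def estimate_coverage(trials=100_000,
                      mean_detect=0.01,     # assumed mean detection delay (hours)
                      mean_reconfig=0.05,   # assumed mean reconfiguration delay
                      deadline=0.1):        # assumed budget before the fault is lethal
    """Fraction of simulated faults handled before the deadline."""
    handled = 0
    for _ in range(trials):
        t = random.expovariate(1.0 / mean_detect) + random.expovariate(1.0 / mean_reconfig)
        if t <= deadline:
            handled += 1
    return handled / trials

c = estimate_coverage()
print(f"estimated coverage factor c = {c:.3f}")
# c would then parameterize the fault-occurrence chain, as in the sketches above.
```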


Eurographics Symposium on Rendering Techniques | 2004

Lattice-Boltzmann lighting

Robert Geist; Karl Rasche; James Westall; Robert J. Schalkoff

A new technique for lighting participating media is suggested. The technique is based on the lattice-Boltzmann method, which is gaining popularity as an alternative to finite-element methods for flow computations, due to its ease of implementation and its ability to handle complex boundary conditions. A relatively simple, grid-based photon transport model is postulated and then shown to describe, in the limit, a diffusion process. An application to lighting clouds is provided, where cloud densities are generated by combining two well-established techniques. Performance of the new lighting technique is not real-time, but the technique is highly parallel and offers the ability to easily represent complex scattering events. Sample renderings are included.
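
A toy version of the grid-based transport idea is sketched below: per-node photon densities for four lattice directions are streamed to neighboring nodes and then redistributed by absorption and isotropic scattering, relaxing toward a smooth, diffusion-like distribution. The 2-D lattice, the four directions, and the simple collision rule are simplifications assumed for this sketch; the paper's lattice and collision operator differ.

```python
import numpy as np

N = 64                        # grid resolution (assumed)
DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
f = np.zeros((4, N, N))       # photon density per direction per node
f[:, N // 2, N // 2] = 1.0    # a point light injected at the center

sigma_a, sigma_s = 0.01, 0.4  # assumed absorption / scattering per step

for step in range(200):
    # Streaming: move each directional density one cell along its direction.
    for i, (dx, dy) in enumerate(DIRS):
        f[i] = np.roll(np.roll(f[i], dx, axis=0), dy, axis=1)
    # Collision: absorb a fraction, scatter a fraction equally over directions.
    total = f.sum(axis=0)
    f = (1.0 - sigma_a - sigma_s) * f + sigma_s * total / 4.0

radiance = f.sum(axis=0)      # the smooth result a renderer would shade with
print(radiance.max(), radiance.sum())
```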


IEEE Transactions on Reliability | 1988

Selection of a checkpoint interval in a critical-task environment

Robert Geist; Robert G. Reynolds; James Westall

The selection of an optimal checkpointing strategy has most often been considered in the transaction processing environment, where systems are allowed unlimited repairs. In this environment an optimal strategy maximizes the time spent in the normal operating state and consequently the rate of transaction processing. This paper seeks a checkpoint strategy that maximizes the probability of critical-task completion on a system with limited repairs. These systems can undergo failure and repair only until a repair time exceeds a specified threshold, at which time the system is deemed to have failed completely. For such systems, a model is derived which yields the probability of completing the critical task when each checkpoint operation has a fixed cost. The optimal number of checkpoints can increase as system reliability improves. The model is extended to include a constraint which enforces timely completion of the critical task.
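
The trade-off described above can be illustrated with a small Monte Carlo sketch: more checkpoints mean less work is lost per failure, but each checkpoint adds a fixed cost and so lengthens the exposure time. The exponential failure and repair times, the repair threshold, and all numbers below are assumptions for the example; the paper's model is analytic.

```python
import random

def completion_prob(n_checkpoints, task_len=100.0, ckpt_cost=1.0,
                    fail_rate=0.01, repair_rate=0.5, repair_limit=3.0,
                    trials=20_000):
    """Estimate the probability that the critical task completes."""
    seg = task_len / (n_checkpoints + 1)   # work committed per checkpoint
    completed = 0
    for _ in range(trials):
        done, alive = 0.0, True
        while alive and done < task_len:
            need = seg + ckpt_cost                     # segment plus checkpoint cost
            if random.expovariate(fail_rate) >= need:
                done += seg                            # committed at the checkpoint
            elif random.expovariate(repair_rate) > repair_limit:
                alive = False                          # repair too long: total failure
            # otherwise the segment is retried from the last checkpoint
        completed += alive
    return completed / trials

for n in (0, 2, 5, 10):
    print(n, completion_prob(n))
```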
