Publications


Featured research published by Jack C. Schryver.


Behavior Research Methods, Instruments, & Computers | 1995

Eye-gaze-contingent control of the computer interface: Methodology and example for zoom detection

Joseph H. Goldberg; Jack C. Schryver

Discrimination of user intent at the computer interface solely from eye gaze can provide a powerful tool, benefiting many applications. An exploratory methodology for discriminating zoom-in, zoom-out, and no-zoom intent was developed for such applications as telerobotics, disability aids, weapons systems, and process control interfaces. Using an eye-tracking system, real-time eye-gaze locations on a display are collected. Using off-line procedures, these data are clustered, using minimum spanning tree representations, and then characterized. The cluster characteristics are fed into a multiple linear discriminant analysis, which attempts to discriminate the zoom-in, zoom-out, and no-zoom conditions. The methodologies, algorithms, and experimental data collection procedure are described, followed by example output from the analysis programs. Although developed specifically for the discrimination of zoom conditions, the methodology has broader potential for discrimination of user intent in other interface operations.
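
The pipeline described above (cluster raw gaze samples, characterize the clusters, then discriminate intent) can be sketched compactly. The following is a minimal illustration, not the authors' implementation: it cuts long edges out of a minimum spanning tree to form clusters, computes a few illustrative per-trial features, and fits a linear discriminant analysis on synthetic stand-in data; the feature set and cut distance are assumptions.

```python
# Minimal sketch (not the published code): MST-based clustering of gaze samples,
# simple per-trial cluster features, and linear discriminant analysis of intent.
# The feature set, cut distance, and synthetic data are illustrative assumptions.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def mst_cluster(points, cut_distance):
    """Label 2-D gaze points by cutting long edges out of their minimum spanning tree."""
    mst = minimum_spanning_tree(squareform(pdist(points))).toarray()
    mst[mst > cut_distance] = 0                      # drop long edges
    _, labels = connected_components(mst + mst.T, directed=False)
    return labels

def trial_features(points, labels):
    """Illustrative per-trial features: cluster count, mean cluster size, mean spread."""
    sizes, spreads = [], []
    for k in np.unique(labels):
        pts = points[labels == k]
        sizes.append(len(pts))
        spreads.append(pts.std(axis=0).mean())
    return [len(sizes), float(np.mean(sizes)), float(np.mean(spreads))]

rng = np.random.default_rng(0)
X, y = [], []
for trial in range(30):                              # synthetic stand-in trials
    gaze = rng.normal(size=(60, 2))                  # (x, y) gaze samples for one trial
    X.append(trial_features(gaze, mst_cluster(gaze, cut_distance=0.4)))
    y.append(trial % 3)                              # 0 = no-zoom, 1 = zoom-in, 2 = zoom-out
lda = LinearDiscriminantAnalysis().fit(np.array(X), np.array(y))
print(lda.predict(np.array(X[:5])))
```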


Microbial Ecology | 2006

Application of Nonlinear Analysis Methods for Identifying Relationships Between Microbial Community Structure and Groundwater Geochemistry

Jack C. Schryver; Craig C. Brandt; Susan M. Pfiffner; Anthony V. Palumbo; Aaron D. Peacock; David C. White; James P. McKinley; Philip E. Long

The relationship between groundwater geochemistry and microbial community structure can be complex and difficult to assess. We applied nonlinear and generalized linear data analysis methods to relate microbial biomarkers (phospholipid fatty acids, PLFA) to groundwater geochemical characteristics at the Shiprock uranium mill tailings disposal site, which is primarily contaminated by uranium, sulfate, and nitrate. First, predictive models were constructed using feedforward artificial neural networks (NN) to predict PLFA classes from geochemistry. To reduce the danger of overfitting, parsimonious NN architectures were selected based on pruning of hidden nodes and elimination of redundant predictor (geochemical) variables. The resulting NN models greatly outperformed the generalized linear models. Sensitivity analysis indicated that tritium, which was indicative of riverine influences, and uranium were important in predicting the distributions of the PLFA classes. In contrast, nitrate concentration and inorganic carbon were least important, and total ionic strength was of intermediate importance. Second, nonlinear principal components (NPC) were extracted from the PLFA data using a variant of the feedforward NN. The NPC grouped the samples according to similar geochemistry. PLFA indicators of Gram-negative bacteria and eukaryotes were associated with the groups of wells with lower levels of contamination. The more contaminated samples contained microbial communities that were dominated by terminally branched saturates and branched monounsaturates, which are indicative of metal reducers, actinomycetes, and Gram-positive bacteria. These results indicate that the microbial community at the site is coupled to the geochemistry, and knowledge of the geochemistry allows prediction of the community composition.
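
As a rough illustration of the first analysis step, the sketch below fits a small feedforward network that maps geochemical measurements to PLFA class proportions. It is a minimal sketch on synthetic data with invented variable names; the pruned architectures, redundant-variable elimination, and sensitivity analysis reported in the paper are not reproduced.

```python
# Minimal sketch with invented column names and synthetic data: a small
# feedforward network predicting PLFA class proportions from geochemistry.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# hypothetical predictors: tritium, uranium, nitrate, inorganic carbon, ionic strength
geochem = rng.normal(size=(40, 5))
# hypothetical targets: proportions of four PLFA classes per sample
plfa = rng.dirichlet(np.ones(4), size=40)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(3,),    # parsimonious hidden layer
                 max_iter=5000, random_state=0),
)
model.fit(geochem, plfa)
print(model.predict(geochem[:3]))            # predicted PLFA class proportions
```

An autoencoder-style network with a narrow bottleneck layer could play the role of the nonlinear principal components mentioned in the second step.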


Future Generation Computer Systems | 1999

Mining multi-dimensional data for decision support

June Donato; Jack C. Schryver; Gregory C. Hinkel; Richard L. Schmoyer; Michael R. Leuze; Nancy W. Grady

Personal bankruptcy is an increasingly common yet little understood phenomenon. Attempts to predict bankruptcy have involved the application of data mining techniques to credit card data. This is a difficult problem, since credit card data is multi-dimensional, consisting of monthly account records and daily transaction records. In this paper, we describe a two-stage approach that combines decision trees and neural networks to predict personal bankruptcy using credit card data.
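
One plausible way to combine the two model families mentioned above is sketched below on synthetic data: a shallow decision tree produces a risk score that is appended as a feature for a small neural network. This is an illustrative hybrid, not necessarily the staging used in the paper.

```python
# Illustrative hybrid of a decision tree and a neural network on synthetic
# account data; not necessarily the staging used in the paper. The tree's risk
# estimate is appended as an extra feature for the network.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 12))                        # monthly account / transaction features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 1.5).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
tree_risk = tree.predict_proba(X)[:, 1]               # stage one: coarse risk score
X2 = np.column_stack([X, tree_risk])                  # stage two input

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X2, y)
print(net.predict_proba(X2[:5])[:, 1])                # final bankruptcy risk scores
```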


IEEE Transactions on Systems, Man, and Cybernetics | 1992

Object-oriented qualitative simulation of human mental models of complex systems

Jack C. Schryver

A qualitative model of an expert's mental model of a complex system (an advanced nuclear power plant) was developed from the qualitative physics of confluences and simulated using an object-oriented extension to Common LISP (Flavors). Invisible connections for flow compatibility, control connections, iterative propagation, and embedded propagation were among the features provided for derivation of causal ordering. Deterministic output was guaranteed through stochastic state transition. A fictitious loop fragment was used to show changes in flow rate through a pump. State transition models provided excellent fits to the simulation data and showed that all conditions converged to steady state. Strictly forward (with the flow) propagation facilitated consistency within intermediate pre-equilibrium states and convergence, as compared to forward propagation with limited backward propagation. The psychological plausibility of qualitative simulation models was evaluated. A further extension of mythical causality is suggested, in which constraint propagation executes on multiple levels of aggregation.
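
The flavor of object-oriented qualitative propagation can be conveyed with a toy example. The sketch below (in Python rather than Common LISP/Flavors, and not the published model) pushes a qualitative flow disturbance strictly forward through a pump, pipe, and valve.

```python
# Toy object-oriented sketch of strictly forward (with the flow) qualitative
# propagation, in Python rather than Common LISP/Flavors; values are qualitative
# signs: -1 (decrease), 0 (no change), +1 (increase).
class Component:
    def __init__(self, name):
        self.name = name
        self.flow = 0                    # qualitative change in flow rate
        self.downstream = []

    def connect(self, other):
        self.downstream.append(other)

    def propagate(self):
        """Push this component's qualitative flow change forward along connections."""
        for nxt in self.downstream:
            if nxt.flow != self.flow:
                nxt.flow = self.flow
                nxt.propagate()

class Pump(Component):
    def set_speed_change(self, sign):
        """A pump speed increase (+1) or decrease (-1) perturbs downstream flow."""
        self.flow = sign
        self.propagate()

# A fictitious loop fragment: pump -> pipe -> valve
pump, pipe, valve = Pump("pump"), Component("pipe"), Component("valve")
pump.connect(pipe)
pipe.connect(valve)
pump.set_speed_change(+1)
print(pipe.flow, valve.flow)             # both become +1 after forward propagation
```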


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 1993

Eye-Gaze Control of the Computer Interface: Discrimination of Zoom Intent

Joseph H. Goldberg; Jack C. Schryver

An analysis methodology and associated experiment were developed to assess whether definable and repeatable signatures of eye-gaze characteristics are evident, preceding a decision to zoom-in, zoom-out, or not to zoom at a computer interface. This user intent discrimination procedure can have broad application in disability aids and telerobotic control. Eye-gaze was collected from 10 subjects in a controlled experiment, requiring zoom decisions. The eye-gaze data were clustered, then fed into a multiple discriminant analysis (MDA) for optimal definition of heuristics separating the zoom-in, zoom-out, and no-zoom conditions. Confusion matrix analyses showed that a number of variable combinations classified at a statistically significant level, but practical significance was more difficult to establish. Composite contour plots demonstrated the regions in parameter space consistently assigned by the MDA to unique zoom conditions. Peak classification occurred at about 1200-1600 msec. Improvements in the methodology to achieve practical real-time zoom control are considered.


Simulation Modelling Practice and Theory | 2012

Metrics for Availability Analysis Using a Discrete Event Simulation Method

Jack C. Schryver; James J. Nutaro; Marvin Jonathan Haire

The system performance metric “availability” is a central concept with respect to the concerns of a plant’s operators and owners, yet it can be abstract enough to resist explanation at system levels. Hence, there is a need for a system-level metric more closely aligned with a plant’s (or, more generally, a system’s) raison d'être. Historically, availability of repairable systems (intrinsic, operational, or otherwise) has been defined as a ratio of times. This paper introduces a new concept of availability, called endogenous availability, defined in terms of a ratio of quantities of product yield. Endogenous availability can be evaluated using a discrete event simulation analysis methodology. A simulation example shows that endogenous availability reduces to conventional availability in a simple series system with different processing rates and without intermediate storage capacity, but diverges from conventional availability when storage capacity is progressively increased. It is shown that conventional availability tends to be conservative when a design includes features, such as in-process storage, that partially decouple the components of a larger system.
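
The distinction between the two availability measures can be reproduced with a very small time-stepped simulation rather than a full discrete event model. In the sketch below, all failure/repair probabilities and buffer capacities are invented: conventional availability is the fraction of time both stages of a series line are up, while the yield-based (endogenous) measure is units produced divided by the ideal yield.

```python
# Time-stepped toy model (not the paper's DES model) of a two-stage series line
# with an intermediate buffer. Failure/repair probabilities, rates, and buffer
# capacities are invented. One unit per step is the ideal yield.
import random

def simulate(buffer_capacity, steps=100_000, p_fail=0.01, p_repair=0.1, seed=0):
    rng = random.Random(seed)
    up = [True, True]                    # stage 1 and stage 2 up/down states
    buffer = produced = uptime_both = 0
    for _ in range(steps):
        for i in (0, 1):                 # random failures and repairs
            up[i] = rng.random() >= p_fail if up[i] else rng.random() < p_repair
        s1_out = 1 if up[0] else 0       # stage 1 output this step
        if up[1]:                        # stage 2 pulls from the buffer, then stage 1
            if buffer > 0:
                buffer -= 1
                produced += 1
            elif s1_out:
                s1_out = 0
                produced += 1
        if s1_out and buffer < buffer_capacity:
            buffer += 1                  # leftover stage 1 output goes to the buffer
        if up[0] and up[1]:
            uptime_both += 1
    conventional = uptime_both / steps   # time-based availability of the series system
    endogenous = produced / steps        # yield-based availability (achieved / ideal yield)
    return conventional, endogenous

for cap in (0, 5, 50):
    print(cap, simulate(cap))
# With no buffer the two measures coincide; as the buffer grows, the yield-based
# measure exceeds the fraction of time both stages are simultaneously up.
```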


IEEE International Conference on High Performance Computing, Data, and Analytics | 1998

Mining Multi-Dimensional Data for Decision Support

June Donato; Jack C. Schryver; Gregory C. Hinkel; Richard L. Schmoyer; Nancy W. Grady; Michael R. Leuze

While it is widely recognized that data can be a valuable resource for any organization, extracting information contained within the data is often a difficult problem. Attempts to obtain information from data may be limited by legacy data storage formats, lack of expert knowledge about the data, difficulty in viewing the data, or the volume of data needing to be processed. The rapidly developing field of Data Mining or Knowledge Data Discovery is a blending of Artificial Intelligence, Statistics, and Human-Computer Interaction. Sophisticated data navigation tools to obtain the information needed for decision support do not yet exist. Each data mining task requires a custom solution that depends upon the character and quantity of the data. This paper presents a two-stage approach for handling the prediction of personal bankruptcy using credit card account data, combining decision tree and artificial neural network technologies. Topics to be discussed include the pre-processing of data, including data cleansing, the filtering of data for pertinent records, and the reduction of data for attributes contributing to the prediction of bankruptcy, and the two steps in the mining process itself.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 1994

Experimental validation of navigation workload metrics

Jack C. Schryver

Advanced computer interfaces in the control room provide limited display area, and information is represented in large-scale display networks. Display navigation may generate disorienting effects, require additional resources for window management, and increase memory and data integration requirements. An experiment was conducted using an elementary Safety Parameter Display System for Pressurized Water Reactors to validate fourteen proposed metrics of navigation workload. Participants were asked to monitor one or two parameters and to answer questions after navigating a prescribed distance in the network. Analyses of variance of a modified task load index and its subscales (confidence, disorientation, effort) supported the claim that navigation of large-scale display networks can impose additional mental load. Eye-gaze and other objective metrics were not validated, indicating the need for more refined probes and data reduction algorithms.
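
For illustration only, the kind of analysis of variance mentioned above can be run in a few lines; the scores below are synthetic and the three navigation-distance conditions are hypothetical.

```python
# Illustrative one-way ANOVA on synthetic task-load scores (not the study's
# data) across three hypothetical navigation-distance conditions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
short = rng.normal(40, 10, size=20)      # modified TLX-style scores, short distance
medium = rng.normal(50, 10, size=20)
far = rng.normal(60, 10, size=20)

f_stat, p_value = stats.f_oneway(short, medium, far)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```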


Reliability and Maintainability Symposium | 2012

The throughput, reliability, availability, maintainability (TRAM) methodology for predicting chemical plant production

James J. Nutaro; Jack C. Schryver; Marvin Jonathan Haire

Fault tree analysis is a method for evaluating reliability and availability in terms of equipment system “states”, but this method does not lend itself easily to the evaluation of equipment interactions through time. This makes fault trees difficult to use for the analysis of systems whose reliability and availability depend on complex interactions among their subsystems. This difficulty is overcome by combining fault trees with discrete event simulation methods. The new TRAM methodology combines models and techniques for the analysis of throughput, availability, reliability, and maintainability into a single approach. This paper describes the TRAM methodology and illustrates it with an application to a chemical processing plant. TRAM combines fault tree analysis at a low level of the system description with discrete event simulation at a higher level to create a new method for analyzing the availability and throughput capacity of material processing plants. Failure and repair data are modeled stochastically by a very flexible type of finite mixture distribution that allows the analyst to separate the effects of different repair strategies, such as reliance on procurement of off-site (vs. on-site) spare parts. An important application of the TRAM method is to facilitate the design of a plant that tolerates outages of its subsystems in the most efficient way possible. Mitigation strategies, including in-process storage, alternate workflows, availability of spare parts, and design for over-production, can all be assessed using the TRAM approach, which thereby facilitates the design of more robust manufacturing systems. The TRAM methodology enables sophisticated “what-if” analyses of alternative designs, e.g., equipment sets, capacities (tank sizes), shift schedules, and spare parts, to optimize plant design and operation. It is a stochastic, time-dependent process that provides probabilities of success (or failure) and confidence bounds on availability and throughput. Finally, the TRAM methodology can help plant managers and owners to focus on the plant production metrics by which they are compensated, and not solely on abstract metrics such as availability. Accordingly, TRAM is potentially a more influential tool in the industry than conventional RAM methods. The TRAM method is based on the discrete event formalism developed by Zeigler et al. [1], and explained further in [2]. In TRAM the plant model is completely separated from the simulation engine and can be specified by input data contained in an XML file. Alternatively, the user can construct connections between subsystem components using a graphical user interface. The GUI is particularly useful for verifying the correct mass balance in the model.
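
The finite-mixture treatment of repair data can be illustrated with a small sketch: repair times are drawn from a two-component lognormal mixture (a stand-in for on-site vs. off-site spare-part repairs) and combined with exponential failure times in a naive availability estimate. All distributions and parameters below are assumptions, and none of the fault tree or discrete event machinery is shown.

```python
# Sketch of the finite-mixture idea: repair times drawn from a two-component
# lognormal mixture (a stand-in for on-site vs. off-site spare-part repairs),
# combined with exponential failure times in a naive availability estimate.
# All distributions and parameters are invented.
import numpy as np

rng = np.random.default_rng(4)

def sample_repair_hours(n):
    """80% fast on-site repairs (~4.5 h median), 20% slow off-site repairs (~55 h median)."""
    offsite = rng.random(n) < 0.2
    return np.where(offsite,
                    rng.lognormal(mean=4.0, sigma=0.5, size=n),
                    rng.lognormal(mean=1.5, sigma=0.5, size=n))

n = 100_000
time_to_failure = rng.exponential(scale=500.0, size=n)   # mean 500 h between failures
time_to_repair = sample_repair_hours(n)
availability = time_to_failure.sum() / (time_to_failure + time_to_repair).sum()
print(f"simulated steady-state availability ~ {availability:.3f}")
```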


Proceedings of the 2012 International Workshop on Smart Health and Wellbeing | 2012

Moving from descriptive to causal analytics: case study of discovering knowledge from US Health Indicators Warehouse

Jack C. Schryver; Mallikarjun Shankar; Songhua Xu

The knowledge management community has introduced a multitude of methods for knowledge discovery on large datasets. In the context of public health intelligence, we integrated some of these methods into an analyst's workflow that proceeds from the data-centric descriptive level of analysis to the model-centric causal level of reasoning. We show several case studies of the proposed analyst's workflow as applied to the US Health Indicators Warehouse (HIW), a medium-scale, public dataset of community health information collected by the US federal government. In our case studies, we demonstrate a series of visual analytics efforts targeted at the HIW, including visual analysis based on correlation matrices, multivariate outlier analysis, multiple linear regression of Medicare costs, confirmatory factor analysis, and hybrid scatterplot and heatmap visualization of the distributions of a group of health indicators. We conclude by sketching a preliminary framework for examining causal dependence hypotheses in future data science research in public health.
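
The descriptive end of the workflow (correlation matrices and multiple linear regression) can be sketched on invented, HIW-like county indicators; the column names, data, and cost proxy below are hypothetical.

```python
# Descriptive-analytics sketch on invented HIW-like county indicators: a
# correlation matrix plus a multiple linear regression of a Medicare-cost
# proxy. Column names and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "obesity_rate": rng.normal(30, 5, 200),
    "smoking_rate": rng.normal(18, 4, 200),
    "uninsured_rate": rng.normal(12, 3, 200),
})
df["medicare_cost"] = (100 * df["obesity_rate"] + 150 * df["smoking_rate"]
                       + rng.normal(0, 500, 200))

print(df.corr().round(2))                           # descriptive: correlation matrix
X = df[["obesity_rate", "smoking_rate", "uninsured_rate"]]
model = LinearRegression().fit(X, df["medicare_cost"])
print(dict(zip(X.columns, model.coef_.round(1))))   # regression coefficients
```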

Collaboration


Dive into Jack C. Schryver's collaborations.

Top Co-Authors

James J. Nutaro

Oak Ridge National Laboratory

Anthony V. Palumbo

Oak Ridge National Laboratory

Craig C. Brandt

Oak Ridge National Laboratory

Mallikarjun Shankar

Oak Ridge National Laboratory

Marvin Jonathan Haire

Oak Ridge National Laboratory

June Donato

Oak Ridge National Laboratory

Songhua Xu

Oak Ridge National Laboratory

Andrew S. Madden

Oak Ridge National Laboratory
