Publication


Featured research published by Glen Williams.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 1985

Identity verification through keyboard characteristics

David Umphress; Glen Williams

Most personal identity mechanisms in use today are artificial. They require specific actions on the part of the user, many of which are not “friendly”. Ideally, a typist should be able to approach a computer terminal, begin typing, and be identified from keystroke characteristics. Individuals exhibit characteristic cognitive properties when interacting with the computer through a keyboard. By examining the properties of keying patterns, statistics can be compiled that uniquely describe the user. Initially, a reference profile is built to serve as a basis of comparison for future typing samples. The profile consists of the average time interval between keystrokes (mean keystroke latency) as well as a collection of the average times required to strike any two successive keys on the keyboard. Typing samples are scored against the reference profile and a score is calculated assessing the confidence that the same individual typed both the sample and the reference profile. This mechanism has the capability of providing identity surveillance throughout the entire time at the keyboard.
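The profile-and-score mechanism described above can be sketched as follows. This is a minimal illustration only: the function names, the millisecond units, the data layout, and the fixed tolerance are assumptions for the sketch, not the paper's actual statistics.

```python
from statistics import mean

def build_profile(samples):
    """Build a reference profile: the mean latency (ms) per digraph.

    `samples` maps a digraph such as "th" to a list of observed
    inter-keystroke latencies. (Hypothetical data layout.)
    """
    return {digraph: mean(times) for digraph, times in samples.items()}

def score_sample(profile, sample, tolerance=50.0):
    """Score a typing sample against the reference profile: the fraction
    of digraph latencies falling within `tolerance` ms of the reference
    mean. A higher score means more confidence that the same individual
    produced both the sample and the profile."""
    hits = total = 0
    for digraph, times in sample.items():
        if digraph not in profile:
            continue
        for t in times:
            total += 1
            if abs(t - profile[digraph]) <= tolerance:
                hits += 1
    return hits / total if total else 0.0

reference = build_profile({"th": [112, 120, 108], "he": [95, 99]})
score = score_sample(reference, {"th": [115, 180], "he": [97]})
```

Because scoring only needs the per-digraph means, the profile stays small and can be compared against every new sample the user types.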


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 1991

Dynamic identity verification via keystroke characteristics

John J. Leggett; Glen Williams; Mark Usnick; Michael T. Longnecker

The implementation of safeguards for computer security is based on the ability to verify the identity of authorized computer systems users accurately. The most common form of identity verification in use today is the password, but passwords have many poor traits as an access control mechanism. To overcome the many disadvantages of simple password protection, we are proposing the use of the physiological characteristics of keyboard input as a method for verifying user identity. After an overview of the problem and summary of previous efforts, a research study is described which was conducted to determine the possibility of using keystroke characteristics as a means of dynamic identity verification. Unlike static identity verification systems in use today, a verifier based on dynamic keystroke characteristics allows continuous identity verification in real-time throughout the work session. Study results indicate significant promise in the temporal personnel identification problem.
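The contrast with static verification is that checking continues throughout the work session. One way to picture that is a sliding window over recent latencies, re-checked on every keystroke. This drift test is an illustrative stand-in, not the statistical verifier the study actually evaluated:

```python
from collections import deque
from statistics import mean

def continuous_verifier(reference_mean, window=20, threshold=0.35):
    """Dynamic-verification sketch (hypothetical test, not the paper's):
    keep a sliding window of the most recent keystroke latencies and
    flag the session when the window mean drifts more than `threshold`
    (as a fraction) away from the reference mean."""
    recent = deque(maxlen=window)
    def observe(latency_ms):
        recent.append(latency_ms)
        if len(recent) < window:
            return True          # not enough data yet; accept
        drift = abs(mean(recent) - reference_mean) / reference_mean
        return drift <= threshold
    return observe

check = continuous_verifier(reference_mean=120.0, window=5)
ok = [check(t) for t in [118, 122, 119, 125, 121]]      # legitimate user
# a much slower typist taking over mid-session eventually trips the check
results = [check(t) for t in [200, 210, 190, 205, 220]]
```

The window makes the check real-time and forgetful: a legitimate user is never locked out by one slow keystroke, while a sustained change in typing rhythm is caught within a few keystrokes.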


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 1988

Verifying identity via keystroke characteristics

John J. Leggett; Glen Williams

This paper reports on an experiment that was conducted to assess the viability of using keystroke digraph latencies (time between two successive keystrokes) as an identity verifier. Basic data are presented and discussed that characterize the class of keystroke digraph latencies that are found to have good potential as static identity verifiers as well as dynamic identity verifiers. Keystroke digraph latencies would be used in conjunction with other security measures to provide a total security package.


IEEE Visualization | 1997

A visualization of music

Sean M. Smith; Glen Williams

Currently, the most popular method of visualizing music is music notation. Through music notation, an experienced musician can gain an impression of how a particular piece of music sounds simply by looking at the notes on paper. However, most listeners are unfamiliar or uncomfortable with the complex nature of music notation. The goal of this project is to present an alternate method for visualizing music that makes use of color and 3D space. This paper describes one method of visualizing music in 3D space. The implementation of this method shows that music visualization is an effective technique, although it is certainly not the only possible method for accomplishing the task. Throughout the course of this project, several variations and alternative approaches were discussed. The final version of this project reflects the decisions that were made in order to present the best possible representation of music data.
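One concrete way to map notes into color and 3D space, in the spirit of the abstract, is sketched below. The specific mapping (time on one axis, pitch on another, loudness on the third, hue by pitch class) is an assumption for illustration; the paper's actual scheme may differ.

```python
import colorsys

def note_to_point(pitch, onset_beats, velocity):
    """Map one note to a 3D position and an RGB color (illustrative
    mapping, not necessarily the paper's). Time runs along x, pitch
    along y, loudness along z; hue follows the pitch class, so octaves
    of the same note share a color."""
    x = onset_beats
    y = pitch                      # MIDI pitch number, 0-127
    z = velocity / 127.0           # normalized loudness
    hue = (pitch % 12) / 12.0      # pitch class -> hue on the color wheel
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return (x, y, z), (r, g, b)

# middle C (MIDI 60) and the C an octave above (MIDI 72) share a hue
(_, c4_color) = note_to_point(60, 0.0, 100)
(_, c5_color) = note_to_point(72, 1.0, 100)
```

Keeping hue tied to pitch class is one way to let a viewer recognize harmonic relationships at a glance without reading music notation.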


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 1984

An empirical investigation of voice as an input modality for computer programming

John J. Leggett; Glen Williams

Recently, automatic speech recognition systems have shown the potential of becoming a useful means of data entry and control. The most successful of these speech recognition systems accept an isolated utterance as input and use a task-oriented syntactically-constrained vocabulary for increased recognition accuracy. At the same time, language-directed editors are beginning to be introduced into the programmer's workplace. A language-directed editor is an editor that has knowledge of the underlying syntax (and basic semantics) of a language. Program entry, then, is syntax-driven and program editing may proceed on a syntactic (semantic) basis. This article discusses the design, implementation, and results of a controlled experiment to evaluate voice versus keyboard (the standard input mode) in a language-directed editing environment. Twenty-four subjects entered and edited program segments under control of a language-directed editor via the two input modes. Measures of speed, accuracy, and efficiency were used to compare these two modes of input. In general, the results showed that the subjects were able to complete more of the input and edit tasks by keyboard (70%) than by voice (50–55%), but that keyboard input had a higher error rate than did voice input. Also, the use of voice was just as efficient as keyboard for the inputting of editing commands. These results must be viewed with the understanding that the subjects were novices with respect to voice input, but were very experienced with keyboard input. In this light, it can be seen that voice holds much promise as a mode of input for computer programming.


Computer Graphics Forum | 1998

Adaptive Supersampling in Object Space Using Pyramidal Rays

Jon Genetti; Dan Gordon; Glen Williams

We introduce a new approach to three important problems in ray tracing: antialiasing, distributed light sources, and fuzzy reflections of lights and other surfaces. For antialiasing, our approach combines the quality of supersampling with the advantages of adaptive supersampling. In adaptive supersampling, the decision to partition a ray is taken in image-space, which means that small or thin objects may be missed entirely. This is particularly problematic in animation, where the intensity of such objects may appear to vary. Our approach is based on considering pyramidal rays (pyrays) formed by the viewpoint and the pixel. We test the proximity of a pyray to the boundary of an object, and if it is close (or marginal), the pyray splits into 4 sub-pyrays; this continues recursively with each marginal sub-pyray until the estimated change in pixel intensity is sufficiently small.
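The recursive split-into-4 logic can be sketched over a pixel footprint as below. Only the subdivision scheme is shown; `marginal` and `intensity` are hypothetical stand-ins for the paper's object-space proximity test and its shading, which are where the real work lies.

```python
def shade_pyray(x, y, size, marginal, intensity, depth=0, max_depth=4):
    """Estimate a pixel's intensity with recursive pyramidal rays
    (a schematic of the subdivision only). `marginal(x, y, size)`
    reports whether the pyray footprint is near an object boundary;
    `intensity(x, y)` shades the footprint center."""
    if depth >= max_depth or not marginal(x, y, size):
        return intensity(x + size / 2, y + size / 2)
    half = size / 2
    # split the marginal pyray into 4 sub-pyrays and average their results
    corners = [(x, y), (x + half, y), (x, y + half), (x + half, y + half)]
    return sum(
        shade_pyray(sx, sy, half, marginal, intensity, depth + 1, max_depth)
        for sx, sy in corners
    ) / 4.0

# toy scene: a vertical edge at x = 0.5 crossing a unit pixel
def near_edge(x, y, size):
    return x < 0.5 < x + size

def shade(x, y):
    return 1.0 if x < 0.5 else 0.0

pixel = shade_pyray(0.0, 0.0, 1.0, near_edge, shade)
```

Because only marginal sub-pyrays are refined, the cost concentrates on silhouette edges while interior regions are shaded with a single sample.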


Symposium on Autonomous Underwater Vehicle Technology | 1996

Architecture of the Texas A&M Autonomous Underwater Vehicle Controller

D.M. Barnett; S.R. McClaran; E. Nelson; M. McDermott; Glen Williams

Presents the software and hardware architectures of the autonomous underwater vehicle controller (AUVC) developed at Texas A&M University. It is a controller for a long-range, highly reliable UUV. Capabilities include mission planning/replanning, path planning, energy management, collision avoidance, threat detection and evasion, failure diagnosis and recovery, radio communication, navigation, and recovery from its internal faults. In its first version, functions were partitioned among eighteen loosely coupled processes. Rule-based systems performed mission management and fault diagnosis, while algorithmic control systems were used for lower-level control. The original AUVC software was designed for a network of sixteen processors in planar-2 configuration, with redundant communication paths. A software component provided reliable distributed computing. The controller was tested using a simulated generic vehicle that contained twenty-one subsystems. Tests on the Large Diameter UUV (LDUUV), using a six-processor version of the AUVC, are reported.


Symposium on Autonomous Underwater Vehicle Technology | 1994

Failure detection in an autonomous underwater vehicle

A. Orrick; M. McDermott; D.M. Barnett; E. Nelson; Glen Williams

A system has been developed for failure detection and identification in the depth and heading control of an AUV. A redundancy management technique was implemented using the CLIPS expert system shell. The term redundancy, as used here, does not mean that sensors are duplicated but that independent values of the same quantity can be calculated by combining data from several different sensors. The rules used for failure detection and identification are presented and discussed. This failure detection scheme was implemented and tested on the simulator for the Texas A&M AUV Controller. Failures were introduced and the performance of the system was evaluated based on its accuracy and time response in correctly detecting and identifying failures. All single failures and most multiple failures were detected and identified correctly. False alarms were avoided by requiring several successive occurrences of an aberration before it was recognized as a failure.
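The two ideas in this abstract, analytic redundancy and persistence before alarm, can be sketched together. This is an illustration of the approach, not the paper's CLIPS rule base; the tolerance and persistence values are assumptions.

```python
def make_failure_detector(tolerance, persistence=3):
    """Analytic-redundancy failure check (illustrative sketch):
    compare two independently derived estimates of the same quantity,
    e.g. depth from a pressure sensor vs. depth integrated from the
    vertical rate, and declare a failure only after `persistence`
    successive disagreements, which suppresses false alarms from
    transient spikes."""
    count = 0
    def check(estimate_a, estimate_b):
        nonlocal count
        if abs(estimate_a - estimate_b) > tolerance:
            count += 1
        else:
            count = 0            # one agreement resets the streak
        return count >= persistence
    return check

depth_check = make_failure_detector(tolerance=0.5)
# a single spike is not a failure; three disagreements in a row are
flags = [depth_check(a, b) for a, b in
         [(10.0, 10.1), (10.0, 12.0), (10.0, 10.2),
          (10.0, 12.0), (10.0, 12.1), (10.0, 12.3)]]
```

Note that "redundancy" here is analytic, as the abstract stresses: no sensor is duplicated, the second estimate is computed from other sensors.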


Symposium on Autonomous Underwater Vehicle Technology | 1996

Development and validation of the Texas A&M University autonomous underwater vehicle controller

E. Nelson; S.R. McClaran; D.M. Barnett; M. McDermott; Glen Williams

This paper discusses the methods and results of the testing procedures used to validate the autonomous underwater vehicle controller at Texas A&M University (TAMU). Work on the controller began in January 1987. US Naval mission objectives drove many of the technical aspects, such as requirements for fault-tolerance and mission specification. A generic unmanned underwater vehicle was configured for controller technology development.


Symposium on Autonomous Underwater Vehicle Technology | 1994

Submersible control using the linear quadratic Gaussian with loop transfer recovery method

D.L. Juul; M. McDermott; E. Nelson; D.M. Barnett; Glen Williams

This paper describes the development and testing of an automatic control system for heading and depth control of an autonomous underwater vehicle (AUV) using the linear quadratic Gaussian with loop transfer recovery (LQG/LTR) method. The control variables were rudder angle and sternplane angle. The nonlinear equations of motion were linearized about various speeds and control inputs. Based on the resulting linearized model a compensator was developed for each speed and gain scheduling was applied to provide a controller that covered the entire range of submersible speeds. Compensator testing was performed using a computer simulation based on the nonlinear equations of motion and satisfactory performance was obtained.
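The gain-scheduling step, which knits the per-speed compensators into one controller, can be sketched as below. The LQG/LTR synthesis itself is omitted; the speeds, the scalar gains, and linear interpolation between design points are assumptions for illustration (the paper's compensators are full dynamic systems, not single gains).

```python
import bisect

def make_gain_schedule(speeds, gains):
    """Gain-scheduling sketch: `gains[i]` is the compensator gain
    designed at operating speed `speeds[i]` (knots, ascending).
    Between design points the gain is linearly interpolated; outside
    the design range the nearest design gain is held."""
    def gain_at(speed):
        if speed <= speeds[0]:
            return gains[0]
        if speed >= speeds[-1]:
            return gains[-1]
        i = bisect.bisect_right(speeds, speed) - 1
        frac = (speed - speeds[i]) / (speeds[i + 1] - speeds[i])
        return gains[i] + frac * (gains[i + 1] - gains[i])
    return gain_at

# hypothetical rudder-loop gains designed at 2, 4, and 6 knots
rudder_gain = make_gain_schedule([2.0, 4.0, 6.0], [1.8, 1.2, 0.9])
g = rudder_gain(5.0)   # halfway between the 4- and 6-knot designs
```

Scheduling on speed is what lets a set of compensators, each valid only near its linearization point, cover the submersible's entire speed range.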
