Alan L. Tharp
North Carolina State University
Publications
Featured research published by Alan L. Tharp.
Information Processing and Management | 1977
Frank E. Muth; Alan L. Tharp
An automatic method for correcting spelling and typing errors from teletypewriter keyboard input is proposed. The computerized correcting process is presented as a heuristic tree search. The correct spellings are stored character-by-character in a pseudo-binary tree. The search examines a small subset of the database (selected branches of the tree) while checking for insertion, substitution, deletion and transposition errors. The correction procedure utilizes the inherent redundancy of natural language. Multiple errors can be handled if at least two correct characters appear between errors. Test results indicate that this approach has the highest error correction accuracy to date.
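To make the search concrete, here is a minimal sketch in the spirit of the abstract, not the authors' exact algorithm: correct spellings are stored character-by-character in a nested-dict trie, and the search tolerates at most one insertion, substitution, deletion, or transposition error. All function names and the sample word list are illustrative.

```python
def build_trie(words):
    """Store correct spellings character-by-character in a nested-dict trie."""
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node["$"] = True          # end-of-word marker
    return root

def correct(word, trie):
    """Return dictionary words reachable from `word` with at most one
    insertion, substitution, deletion, or transposition error."""
    results = set()

    def search(node, i, prefix, edited):
        if i == len(word) and "$" in node:
            results.add(prefix)
        for ch, child in node.items():
            if ch == "$":
                continue
            if i < len(word) and ch == word[i]:
                search(child, i + 1, prefix + ch, edited)    # characters agree
            elif not edited:
                if i < len(word):
                    search(child, i + 1, prefix + ch, True)  # substitution error
                search(child, i, prefix + ch, True)          # char missing from input
        if not edited and i < len(word):
            search(node, i + 1, prefix, True)                # extra char typed
            if i + 1 < len(word):
                a, b = word[i + 1], word[i]
                if a != "$" and a in node and b in node[a]:
                    search(node[a][b], i + 2, prefix + a + b, True)  # transposition

    search(trie, 0, "", False)
    return results

trie = build_trie(["the", "cat", "hello"])
print(correct("teh", trie))   # the transposition branch recovers "the"
```

Because a match consumes no edit budget, only the branches near an error are explored, which mirrors the abstract's point that the search touches a small subset of the tree.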
International Journal of Human-computer Interaction | 1991
Richard Holcomb; Alan L. Tharp
This paper advances an amalgamated model of software usability derived from much of the vast research on the subject. The model organizes that research into seven basic usability principles, their underlying attributes, and associated relative weights. This model of software usability for human-computer interaction has two primary goals: (a) to allow software designers to make quantitative decisions about which usability attributes should be included in a design, and (b) to provide a usability metric by which software designs can be consistently rated and compared. Because ultimately it is the users of a software system who decide how easy its user interface is to manipulate, 988 users were asked to evaluate the model's effectiveness. They were requested to rank the importance of each attribute in the model. Word processing was chosen as the specific interface type. To ensure some similarity in the respondents' backgrounds, questionnaires were sent to users of a leading word processor. The 332 responses w...
Software - Practice and Experience | 1982
Alan L. Tharp; Kuo-Chung Tai
This paper studies the use of text signatures in string searching. Text signatures are a coded representation of a unit of text formed by hashing substrings into bit positions which are, in turn, set to one. Then instead of searching an entire line of text exhaustively, the text signature may be examined first to determine if complete processing is warranted. A hashing function which minimizes the number of collisions in a signature is described. Experimental results for two signature lengths with both a text file and a program file are given. Analyses of the results and the utility and application of the method conclude the discussion.
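A minimal sketch of the idea, with illustrative parameters (the 2-character substring width, the signature length, and the use of Python's built-in hash are assumptions, not the paper's choices): each substring of a line is hashed to a bit position, and a line can be skipped whenever its signature lacks a bit that the pattern's signature sets.

```python
SIG_BITS = 64  # signature length; the paper compares two lengths

def signature(text, bits=SIG_BITS):
    """Hash each 2-character substring to a bit position and set it."""
    sig = 0
    for i in range(len(text) - 1):
        sig |= 1 << (hash(text[i:i + 2]) % bits)
    return sig

def may_contain(line_sig, pattern):
    """If any pattern bit is absent from the line's signature, the line
    cannot contain the pattern; a positive answer still requires an
    exact search, since bit collisions cause false positives."""
    psig = signature(pattern)
    return line_sig & psig == psig

lines = ["the quick brown fox", "jumps over the lazy dog"]
hits = [l for l in lines if may_contain(signature(l), "lazy") and "lazy" in l]
```

Note that Python randomizes string hashing per process, so signatures are comparable only within one run; the filter still admits no false negatives, which is why the cheap test can safely precede the exhaustive `in` check.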
IEEE Transactions on Knowledge and Data Engineering | 1994
Marshall D. Brain; Alan L. Tharp
Many current perfect hashing algorithms suffer from the problem of pattern collisions. In this paper, a perfect hashing technique that uses array-based tries and a simple sparse matrix packing algorithm is introduced. This technique eliminates all pattern collisions, and, because of this, it can be used to form ordered minimal perfect hashing functions on extremely large word lists. This algorithm is superior to other known perfect hashing functions for large word lists in terms of function building efficiency, pattern collision avoidance, and retrieval function complexity. It has been successfully used to form an ordered minimal perfect hashing function for the entire 24,481-element Unix word list without resorting to segmentation. The item lists addressed by the perfect hashing function formed can be ordered in any manner, including alphabetically, to easily allow other forms of access to the same list.
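One way to picture the array-based trie is as a 2-D transition table whose rows are nodes and whose columns are characters; traversing a key yields its ordinal in the sorted word list, which is what makes the mapping ordered and minimal. The sketch below (lowercase keys only; the alphabet size and table layout are assumptions for illustration) shows the unpacked table that a sparse-matrix packing step would then compress.

```python
ALPHA = 27  # columns: slot 0 holds a word's ordinal, 1-26 hold 'a'-'z'

def build(words):
    """Build a 2-D array trie; row 0 is the root node."""
    table = [[None] * ALPHA]
    for idx, w in enumerate(sorted(words)):   # sorted => ordered mapping
        node = 0
        for ch in w:
            c = ord(ch) - ord('a') + 1
            if table[node][c] is None:
                table.append([None] * ALPHA)  # allocate a fresh node row
                table[node][c] = len(table) - 1
            node = table[node][c]
        table[node][0] = idx                  # store the word's ordinal
    return table

def lookup(table, w):
    """Follow transitions; return the key's ordinal, or None if absent."""
    node = 0
    for ch in w:
        node = table[node][ord(ch) - ord('a') + 1]
        if node is None:
            return None
    return table[node][0]

table = build(["cat", "dog", "cab"])
```

The rows are mostly empty, which is exactly the sparsity that the packing algorithm exploits to collapse the table into a compact 1-D array.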
Information Systems | 1990
Marshall D. Brain; Alan L. Tharp
This article presents a simple algorithm for packing sparse 2-D arrays into minimal 1-D arrays in O(r²) time. Retrieving an element from the packed 1-D array is O(1). This packing algorithm is then applied to create minimal perfect hashing functions for large word lists. Many existing perfect hashing algorithms process large word lists by segmenting them into several smaller lists. The perfect hashing function described in this article has been used to create minimal perfect hashing functions for unsegmented word sets of up to 5000 words. Compared with other current algorithms for perfect hashing, this algorithm is a significant improvement in terms of both time and space efficiency.
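A common way to achieve this kind of packing is first-fit row displacement: each row is slid along the 1-D array until its non-empty cells fall into free slots, and an owner array disambiguates cells at retrieval time. The sketch below illustrates that idea; it is not necessarily the article's exact algorithm.

```python
def pack(matrix):
    """Pack a sparse 2-D list (empty cells = None) into a 1-D list plus
    per-row displacements, using first-fit row displacement."""
    packed, owner, disp = [], [], []
    for r, row in enumerate(matrix):
        cols = [c for c, v in enumerate(row) if v is not None]
        d = 0
        # slide the row right until none of its cells collide
        while not all(d + c >= len(packed) or packed[d + c] is None
                      for c in cols):
            d += 1
        need = max((d + c for c in cols), default=-1) + 1
        if need > len(packed):
            grow = need - len(packed)
            packed.extend([None] * grow)
            owner.extend([None] * grow)
        for c in cols:
            packed[d + c] = row[c]
            owner[d + c] = r      # remember which row owns this slot
        disp.append(d)
    return packed, owner, disp

def get(packed, owner, disp, r, c):
    """O(1) retrieval: one index computation plus an ownership check."""
    i = disp[r] + c
    if 0 <= i < len(packed) and owner[i] == r:
        return packed[i]
    return None

m = [[None, 1, None], [2, None, 3], [None, None, 4]]
packed, owner, disp = pack(m)
```

The ownership check is what lets different rows share slots of the 1-D array without confusing their entries, and it is the only extra work retrieval pays.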
Software - Practice and Experience | 1989
Marshall D. Brain; Alan L. Tharp
This article presents a procedure for constructing a near-perfect hashing function. The procedure, which is a modification of Cichelli's algorithm, builds the near-perfect hashing function sufficiently fast to allow larger word sets to be used than were previously possible. The improved procedure is the result of examining the original algorithm for the causes of its sluggish performance and then modifying them. In doing so an attempt was made to preserve the basic simplicity of the original algorithm. The improved performance comes at the expense of more storage. The six modifications used to improve performance are explained in detail and experimental results are given for word sets of varying sizes.
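Cichelli's scheme hashes a word as its length plus values assigned to its first and last letters, with the assignment found by backtracking. A compact illustration of that search (without the paper's six speed-up modifications, and with an illustrative value bound) might look like:

```python
def cichelli(words, max_g):
    """Backtracking search for letter values g such that
    h(w) = len(w) + g[w[0]] + g[w[-1]] is collision-free."""
    letters = sorted({w[0] for w in words} | {w[-1] for w in words})
    g = {}

    def place(i):
        if i == len(letters):
            return True
        for v in range(max_g + 1):
            g[letters[i]] = v
            # prune: check only words whose first and last letters are set
            hs = [len(w) + g[w[0]] + g[w[-1]]
                  for w in words if w[0] in g and w[-1] in g]
            if len(hs) == len(set(hs)) and place(i + 1):
                return True
        del g[letters[i]]     # no value worked; backtrack
        return False

    return dict(g) if place(0) else None

g = cichelli(["cat", "dog", "bird", "fish"], 3)
```

The backtracking cost grows quickly with the word set, which is exactly the sluggishness the article's modifications attack.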
Information Systems | 1988
Yeong-Shou Hsiao; Alan L. Tharp
Adaptive hashing is a new file processing scheme which combines the organization of a B+-tree with the operational algorithms of order-preserving linear hashing, and in so doing, it fully utilizes the advantages of both. Its performance, which can be controlled by a single parameter, is stable under all circumstances. Its storage utilization is nearly 80% at any time under any circumstances. No other file organization or algorithms are known which attain such stable and predictable performance.
technical symposium on computer science education | 1981
Alan L. Tharp
Much attention has been given to the content of introductory computer science courses, but based upon a perusal of introductory textbooks, it appears that somewhat less attention has been given to the programming exercises to be used in these courses. Programming exercises can be modified to provide a better educational experience for the student. An example of how atypical programming exercises were incorporated into an introductory programming language course is described.
computer software and applications conference | 1989
Richard Holcomb; Alan L. Tharp
The purpose of this study was to develop a basic model of software usability for human-computer interaction that accomplishes two primary goals: to allow software designers to make quantitative decisions about which usability attributes should be included in a design, and to provide a usability metric by which software designs can be consistently rated and compared. The model of software usability presented amalgamates the results from much of the research on usability and organizes it into seven basic usability principles, their underlying attributes, and associated relative weights.
technical symposium on computer science education | 1982
Alan L. Tharp
With the diversity of high-level programming languages available, selecting the “right” one for a computer science curriculum or course can be a befuddling process. For a multitude of reasons, ranging from the manner in which students approach problems to the utilization of scarce computing resources, the ramifications of a decision on the choice of a programming language are significant throughout a computer science curriculum. The purpose of this paper is to provide information relevant to the selection process. Particular attention is given to COBOL, FORTRAN, Pascal, PL/I, and Snobol; both qualitative and quantitative factors are considered. The quantitative results were obtained from processing a binary tree insertion and retrieval algorithm in each language. The machine resources used for this algorithm are given for both interpreter and compiler versions of translators for each language.