Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where John G. Cleary is active.

Publication


Featured research published by John G. Cleary.


Communications of the ACM | 1987

Arithmetic coding for data compression

Ian H. Witten; Radford M. Neal; John G. Cleary

The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
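
The mechanism is compact enough to sketch. Below is a minimal arithmetic coder for a fixed (non-adaptive) model, using floating-point interval narrowing for clarity; practical coders, including the one described in the paper, use integer arithmetic with incremental renormalization to avoid precision loss. The names and the fixed model are illustrative, not taken from the paper.

```python
# Minimal arithmetic coding sketch: encode a message by repeatedly
# narrowing the interval [low, high) to the slice assigned to each
# symbol by the model. Floating point limits this to short messages.

def cumulative(model):
    """Map each symbol to its [low, high) slice of the unit interval."""
    ranges, low = {}, 0.0
    for sym, p in model.items():
        ranges[sym] = (low, low + p)
        low += p
    return ranges

def encode(message, model):
    ranges = cumulative(model)
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        s_lo, s_hi = ranges[sym]
        low, high = low + span * s_lo, low + span * s_hi
    return (low + high) / 2  # any number inside the final interval

def decode(code, length, model):
    ranges = cumulative(model)
    out = []
    for _ in range(length):
        for sym, (s_lo, s_hi) in ranges.items():
            if s_lo <= code < s_hi:
                out.append(sym)
                code = (code - s_lo) / (s_hi - s_lo)  # rescale into the slice
                break
    return "".join(out)

model = {"a": 0.5, "b": 0.3, "c": 0.2}
assert decode(encode("abcab", model), 5, model) == "abcab"
```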


IEEE Transactions on Communications | 1984

Data Compression Using Adaptive Coding and Partial String Matching

John G. Cleary; Ian H. Witten

The recently developed technique of arithmetic coding, in conjunction with a Markov model of the source, is a powerful method of data compression in situations where a linear treatment is inappropriate. Adaptive coding allows the model to be constructed dynamically by both encoder and decoder during the course of the transmission, and has been shown to incur a smaller coding overhead than explicit transmission of the model's statistics. But there is a basic conflict between the desire to use high-order Markov models and the need to have them formed quickly as the initial part of the message is sent. This paper describes how the conflict can be resolved with partial string matching, and reports experimental results which show that mixed-case English text can be coded in as little as 2.2 bits/character with no prior knowledge of the source.
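
The escape mechanism at the heart of partial string matching can be sketched briefly. The model below blends contexts of decreasing order: if the longest context has never seen the next symbol, it "escapes" to a shorter one. This is a simplified PPM-style predictor under assumed conventions (one escape count per context, a uniform byte-level fallback); the paper's exact escape probabilities differ, and a real coder would feed these estimates to an arithmetic coder.

```python
# Simplified PPM-style model: per-context symbol counts plus an
# escape to the next shorter context when a symbol is unseen.
from collections import defaultdict

class PPMModel:
    def __init__(self, max_order=2):
        self.max_order = max_order
        self.counts = defaultdict(lambda: defaultdict(int))

    def _orders(self, history):
        # usable context lengths: 0 .. min(max_order, len(history))
        return range(min(self.max_order, len(history)) + 1)

    def update(self, history, sym):
        for k in self._orders(history):
            ctx = history[len(history) - k:]
            self.counts[ctx][sym] += 1

    def prob(self, history, sym):
        p_escape = 1.0
        for k in reversed(self._orders(history)):  # longest context first
            seen = self.counts[history[len(history) - k:]]
            total = sum(seen.values())
            if total == 0:
                continue
            count = seen.get(sym, 0)
            if count:
                return p_escape * count / (total + 1)  # +1 for the escape
            p_escape *= 1.0 / (total + 1)
        return p_escape / 256  # order -1: uniform over a byte alphabet

m = PPMModel(max_order=2)
text = "abracadabra"
for i, ch in enumerate(text):
    m.update(text[:i], ch)
print(m.prob("abracadab", "r"))  # context "ab" has twice been followed by "r"
```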


ACM Computing Surveys | 1989

Modeling for text compression

Tim Bell; Ian H. Witten; John G. Cleary

The best schemes for text compression use large models to help them predict which characters will come next. The actual next characters are coded with respect to the prediction, resulting in compression of information. Models are best formed adaptively, based on the text seen so far. This paper surveys successful strategies for adaptive modeling that are suitable for use in practical text compression systems.

The strategies fall into three main classes: finite-context modeling, in which the last few characters are used to condition the probability distribution for the next one; finite-state modeling, in which the distribution is conditioned by the current state (and which subsumes finite-context modeling as an important special case); and dictionary modeling, in which strings of characters are replaced by pointers into an evolving dictionary. A comparison of different methods on the same sample texts is included, along with an analysis of future research directions.
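
Of the three classes, dictionary modeling is the easiest to demonstrate in a few lines. The sketch below implements an LZ78-style scheme, one well-known member of that class (the survey covers several): each output pair points into a dictionary of previously seen phrases that grows as the text is scanned.

```python
# LZ78-style dictionary modeling sketch: replace strings with
# (dictionary index, next character) pairs; the dictionary is built
# identically by compressor and decompressor, so it is never transmitted.

def lz78_compress(text):
    dictionary = {"": 0}
    out, phrase = [], ""
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch            # keep extending the match
        else:
            out.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                      # flush a trailing partial match
        out.append((dictionary[phrase[:-1]], phrase[-1]))
    return out

def lz78_decompress(pairs):
    entries, out = [""], []
    for idx, ch in pairs:
        s = entries[idx] + ch
        entries.append(s)
        out.append(s)
    return "".join(out)

text = "abracadabra abracadabra"
assert lz78_decompress(lz78_compress(text)) == text
```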


The Visual Computer | 1988

Analysis of an algorithm for fast ray tracing using uniform space subdivision

John G. Cleary; Geoff Wyvill

Ray tracing is becoming popular as the best method of rendering high-quality images from three-dimensional models. Unfortunately, the computational cost is high. Recently, a number of authors have reported on ways to speed up this process by means of space subdivision, which is used to minimize the number of intersection calculations. We describe such an algorithm together with an analysis of the factors which affect its performance. The critical operation of skipping an empty space subdivision can be done very quickly, using only integer addition and comparison. A theoretical analysis of the algorithm is developed. It shows how the space and time requirements vary with the number of objects in the scene.
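
The "integer addition and comparison" step refers to an incremental grid traversal. The sketch below shows the idea in two dimensions with floating-point arithmetic for readability, whereas the paper's version works in 3D and keeps the inner loop in integers; names are illustrative and the ray's direction components are assumed nonzero.

```python
# Incremental uniform-grid traversal ("3D-DDA" style, shown in 2D):
# advance one cell at a time along whichever axis has the nearer
# cell boundary, so empty cells are skipped with a compare and an add.

def cells_along_ray(origin, direction, cell_size, n_cells):
    """Yield the (ix, iy) grid cells a 2D ray visits, in order."""
    ix, iy = int(origin[0] // cell_size), int(origin[1] // cell_size)
    step_x = 1 if direction[0] > 0 else -1
    step_y = 1 if direction[1] > 0 else -1
    # Ray parameter t at the first x/y boundary, and per-cell increments.
    next_x = ((ix + (step_x > 0)) * cell_size - origin[0]) / direction[0]
    next_y = ((iy + (step_y > 0)) * cell_size - origin[1]) / direction[1]
    dt_x = cell_size / abs(direction[0])
    dt_y = cell_size / abs(direction[1])
    while 0 <= ix < n_cells and 0 <= iy < n_cells:
        yield ix, iy
        if next_x < next_y:         # x boundary is nearer: step in x
            ix, next_x = ix + step_x, next_x + dt_x
        else:
            iy, next_y = iy + step_y, next_y + dt_y

print(list(cells_along_ray((0.5, 0.5), (1.0, 0.4), 1.0, 4)))
```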


Computers & Security | 1988

On the privacy afforded by adaptive text compression

Ian H. Witten; John G. Cleary

Ordinary techniques of text compression provide some degree of privacy for messages being stored or transmitted. First, by recoding messages, compression protects them from the casual observer. Second, by removing redundancy, it denies a cryptanalyst the leverage of the normal statistical regularities in natural language. Third, and most important, the best text compression systems use adaptive modeling so that they can take advantage of the characteristics of the text being transmitted. The model acts as a very large key, without which decryption is impossible. Adaptive modeling means that the key depends on the entire text that has been transmitted so far since the time the encoder/decoder system was initialized. This paper introduces the modern approach to text compression and describes a highly effective adaptive method, with particular emphasis on its potential for protecting messages from eavesdroppers. The technique is potentially fast and provides both encryption and data compression.
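
The paper concerns adaptive models, but the "shared state as key" intuition can be illustrated with a standard library primitive: zlib's preset dictionary is state both ends must agree on before the stream can be decoded. This is only a loose analogy, not the paper's method, and a fixed preset dictionary is far weaker than a full adaptive model.

```python
# Loose analogy: a zlib preset dictionary acts like shared secret
# state; without the same dictionary, decompression fails.
import zlib

shared = b"the quick brown fox jumps over the lazy dog"  # agreed "key" text

c = zlib.compressobj(zdict=shared)
payload = c.compress(b"the quick brown fox says hello") + c.flush()

d = zlib.decompressobj(zdict=shared)
print(d.decompress(payload))        # round-trips with the shared dictionary

try:
    zlib.decompressobj().decompress(payload)   # no dictionary supplied
except zlib.error as e:
    print("fails without the shared dictionary:", e)
```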


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 1986

Graphical display of complex information within a Prolog debugger

Alan Dewar; John G. Cleary

An interactive Prolog debugger, Dewlap, is described. Output from the debugger is in the form of graphical displays of both the derivation tree and the parameters to procedure calls. The major advantage of such displays is that they allow important information to be displayed prominently and unimportant information to be shrunk so that it is accessible but not distracting. Other advantages include the following: the control flow in Prolog is clearly shown; the control context of a particular call is readily determined; it is easy to find out whether two uninstantiated variables are bound together; and very fine control is possible over debugging and display options. A high level graphics language is provided to allow the user to tailor the graphical display of data structures to particular applications. A number of issues raised by the need to update such displays efficiently and to control their perceived complexity are addressed. The Dewlap system is implemented in Prolog on relatively standard hardware with a central processor running Unix and remote workstations with bit-mapped displays and mice.


Software - Practice and Experience | 1993

Bonsai: a compact representation of trees

John J. Darragh; John G. Cleary; Ian H. Witten

This paper shows how trees can be stored in a very compact form, called ‘Bonsai’, using hash tables. A method is described that is suitable for large trees that grow monotonically within a predefined maximum size limit. Using it, pointers in any tree can be represented within 6 + ⌈log₂ n⌉ bits per node, where n is the maximum number of children a node can have. We first describe a general way of storing trees in hash tables, and then introduce the idea of compact hashing which underlies the Bonsai structure. These two techniques are combined to give a compact representation of trees, and a practical methodology is set out to permit the design of these structures. The new representation is compared with two conventional tree implementations in terms of the storage required per node. Examples of programs that must store large trees within a strict maximum size include those that operate on trie structures derived from natural language text. We describe how the Bonsai technique has been applied to the trees that arise in text compression and adaptive prediction, and include a discussion of the design parameters that work well in practice.
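
The key trick is that a node is identified by hashing the pair (parent, edge symbol), so no child pointers are stored at all. The sketch below shows that hash-table representation using a Python dict for clarity; Bonsai proper goes further, using compact (open-addressed) hashing so that each slot stores only a few check bits rather than the full key. Names are illustrative.

```python
# Tree-in-a-hash-table sketch: the key (parent slot, symbol) *is* the
# edge, so following or adding a child is a single hash lookup.

class HashTree:
    def __init__(self):
        self.slots = {(-1, None): 0}   # the root lives in slot 0
        self.next_slot = 1

    def child(self, parent_slot, symbol, create=False):
        key = (parent_slot, symbol)
        if key not in self.slots:
            if not create:
                return None
            self.slots[key] = self.next_slot
            self.next_slot += 1
        return self.slots[key]

# Build a trie over a few words; traversal needs only hashing.
t = HashTree()
for word in ["bon", "bonsai", "bone"]:
    node = 0
    for ch in word:
        node = t.child(node, ch, create=True)

print(t.child(0, "b"))   # first level reached by hashing (0, "b")
```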


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 1984

On frequency-based menu-splitting algorithms

Ian H. Witten; John G. Cleary; Saul Greenberg

If a menu-driven display is to be device-independent, the storage of information must be separated from its presentation by creating menus dynamically. As a first step, this article evaluates menu-construction algorithms for ordered directories whose access profile is specified. The algorithms are evaluated by the average number of selections required to retrieve items. While it is by no means suggested that the system designer should ignore other relevant information (natural groupings of menu items, context in terms of prior selections, and so on), the average selection count provides an unambiguous quantitative criterion by which to evaluate the performance of menu-construction algorithms. Even in this tightly-circumscribed situation, optimal menu construction is surprisingly difficult. If the directory entries are accessed uniformly, theoretical analysis leads to a selection algorithm different from the obvious one of splitting ranges into approximately equal parts at each stage. Analysis is intractable for other distributions, although the performance of menu-splitting algorithms can be bounded. The optimal menu tree can be found by searching, but this is computationally infeasible for any but the smallest problems. Several practical algorithms, which differ in their treatment of rounding in the menu-splitting process and lead in general to quite different menu trees, have been investigated by computer simulation with a Zipf distribution access profile. Surprisingly, their performance is remarkably similar. However, our limited experience with optimal menu trees suggests that these algorithms leave some room for improvement.
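
One simple splitting rule of the kind this evaluation covers can be sketched directly: split the ordered directory where the two halves' access probabilities balance, and score the resulting tree by its expected selection count. This binary, equal-probability rule is an illustrative baseline, not the article's recommended algorithm.

```python
# Frequency-based menu splitting sketch: binary menus, with the split
# chosen to balance the access probability on each side.

def build_menu(items, probs):
    if len(items) == 1:
        return items[0]
    total, acc = sum(probs), 0.0
    best_i, best_diff = 1, float("inf")
    for i in range(1, len(items)):      # candidate split points
        acc += probs[i - 1]
        diff = abs(acc - (total - acc))
        if diff < best_diff:
            best_i, best_diff = i, diff
    return (build_menu(items[:best_i], probs[:best_i]),
            build_menu(items[best_i:], probs[best_i:]))

def avg_selections(tree, prob_of, depth=0):
    """Expected number of selections needed to reach an item."""
    if isinstance(tree, str):
        return prob_of[tree] * depth
    return sum(avg_selections(b, prob_of, depth + 1) for b in tree)

items = [f"item{i}" for i in range(1, 9)]
weights = [1.0 / i for i in range(1, 9)]        # Zipf-like access profile
total = sum(weights)
probs = [w / total for w in weights]
tree = build_menu(items, probs)
print(avg_selections(tree, dict(zip(items, probs))))
```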


Operating Systems Review | 1983

Jade: a distributed software prototyping environment

Ian H. Witten; Graham M. Birtwistle; John G. Cleary; David R. Hill; Danny Levinson; Greg Lomow; Radford M. Neal; Murray Peterson; Brian W. Unger; Brian Wyvill

The Jade research project is aimed at building an environment which comfortably supports the design, construction, and testing of distributed computer systems. This note is an informal project description which delimits the scope of the work and identifies the research problems which are tackled. Some design issues are discussed, and progress to date is described.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 1987

Acquisition of uncertain rules in a probabilistic logic

John G. Cleary

The problem of acquiring uncertain rules from examples is considered. The uncertain rules are expressed using a simple probabilistic logic which obeys all the axioms of propositional logic. By using three truth values (true, false, undefined), a consistent expression of contradictory evidence is obtained. The logic is also able to express correlations between rules and to deal with uncertain rules; the probabilities of these correlations can be computed directly from examples.
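
A hedged sketch of one piece of this: estimating a rule's probability from three-valued examples, where "undefined" observations are excluded from the evidence. This follows the abstract's description only loosely and is not the paper's formalism; the example data and names are invented for illustration.

```python
# Estimate P(consequent | antecedent) from examples with three truth
# values: True, False, and None for "undefined" (no evidence).

def rule_probability(examples, antecedent, consequent):
    support = confirmed = 0
    for ex in examples:
        a, c = antecedent(ex), consequent(ex)
        if a is True and c is not None:   # count only defined evidence
            support += 1
            confirmed += (c is True)
    return None if support == 0 else confirmed / support

examples = [{"bird": True,  "flies": True},
            {"bird": True,  "flies": False},
            {"bird": True,  "flies": None},   # undefined: ignored
            {"bird": False, "flies": True}]

p = rule_probability(examples,
                     lambda e: e["bird"], lambda e: e["flies"])
print(p)   # 0.5: of the defined bird examples, half fly
```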

Collaboration


Dive into John G. Cleary's collaboration.

Top Co-Authors

Ian H. Witten

University of Canterbury