Christopher J. Fox
Bell Labs
Publications
Featured research published by Christopher J. Fox.
Annual Computer Security Applications Conference | 1999
John P. McDermott; Christopher J. Fox
The relationships between the work products of a security engineering process can be hard to understand, even for persons with a strong technical background but little knowledge of security engineering. Market forces are driving software practitioners who are not security specialists to develop software that requires security features. When these practitioners develop software solutions without appropriate security-specific processes and models, they sometimes fail to produce effective solutions. We have adapted a proven object-oriented modeling technique, use cases, to capture and analyze security requirements in a simple way. We call the adaptation an abuse case model. Its relationship to other security engineering work products is relatively simple, from a user perspective.
International ACM SIGIR Conference on Research and Development in Information Retrieval | 1989
Christopher J. Fox
A stop list, or negative dictionary, is a device used in automatic indexing to filter out words that would make poor index terms. Traditionally, stop lists are supposed to have included only the most frequently occurring words. In practice, however, stop lists have tended to include infrequently occurring words, and have not included many frequently occurring words. Infrequently occurring words seem to have been included because stop list compilers have not, for whatever reason, consulted empirical studies of word frequencies. Frequently occurring words seem to have been left out for the same reason, and also because many of them might still be important as index terms. This paper reports an exercise in generating a stop list for general text based on the Brown corpus of 1,014,000 words drawn from a broad range of literature in English. We start with a list of tokens occurring more than 300 times in the Brown corpus. From this list of 278 words, 32 are culled on the grounds that they are too important as potential index terms. Twenty-six words are then added to the list in the belief that they may occur very frequently in certain kinds of literature. Finally, 149 words are added to the list because the finite-state-machine-based filter in which this list is intended to be used is able to filter them at almost no cost. The final product is a list of 421 stop words that should be maximally efficient and effective in filtering the most frequently occurring and semantically neutral words in general literature in English.
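The procedure the abstract describes (start from tokens above a frequency threshold, cull words too important as index terms, then add words expected to be frequent elsewhere) can be sketched roughly as follows. The sample corpus, thresholds, and word lists here are illustrative stand-ins, not the paper's actual Brown-corpus data:

```python
from collections import Counter

def build_stop_list(tokens, threshold=300, keep=(), extra=()):
    """Build a stop list by frequency threshold, mimicking the three-step
    process described above (threshold, cull, add). Illustrative only."""
    counts = Counter(t.lower() for t in tokens)
    # Step 1: start with every token above the frequency threshold.
    frequent = {w for w, n in counts.items() if n > threshold}
    # Step 2: cull words judged too important as potential index terms.
    frequent -= set(keep)
    # Step 3: add words expected to be frequent in certain literatures.
    frequent |= set(extra)
    return sorted(frequent)

# Toy corpus: "the" occurs four times, every other word at most twice.
corpus = "the cat sat on the mat and the dog saw the cat".split()
print(build_stop_list(corpus, threshold=3))  # ['the']
```

With a real corpus the `keep` and `extra` lists would be compiled by hand, as the paper does for its 32 culled and 26 added words.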
Information Processing and Management | 1994
Christopher J. Fox; Anany Levitin; Tom Redman
The rapid proliferation of computer-based information systems is increasing the importance of data quality to both system makers and users. However, there is neither an established framework nor common terminology for investigating data quality. There is not even agreement on what the term “data” means. We lay a foundation for the study of data quality in this paper. In the first part of the paper we discuss five approaches to defining “data” in the literature. We then propose an approach especially conducive to discussing data quality. In the second part of the paper we discuss the most important dimensions of data quality: accuracy, completeness, consistency, and currentness. We define these four and several related dimensions and discuss them in detail. We close the paper by outlining several areas for further research on data quality.
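Three of the four dimensions named in the abstract (completeness, consistency, and currentness) can be checked mechanically for a single record; accuracy requires comparison against real-world values and is omitted. The field names and thresholds below are illustrative assumptions, not definitions from the paper:

```python
from datetime import date

def check_record(record, required, max_age_days, today=date(2024, 1, 1)):
    """Flag violations of completeness, consistency, and currentness
    for one record (dict). Field names here are hypothetical examples."""
    problems = []
    # Completeness: every required field must be present and non-empty.
    for field in required:
        if not record.get(field):
            problems.append(f"incomplete: missing {field}")
    # Consistency: related fields must not contradict each other.
    if record.get("end") and record.get("start") and record["end"] < record["start"]:
        problems.append("inconsistent: end precedes start")
    # Currentness: the record must have been updated recently enough.
    updated = record.get("updated")
    if updated and (today - updated).days > max_age_days:
        problems.append("stale: not updated recently")
    return problems

bad = {"name": "", "start": date(2023, 5, 1), "end": date(2023, 4, 1),
       "updated": date(2022, 1, 1)}
print(check_record(bad, required=["name"], max_age_days=365))
```

One design point worth noting: the check returns a list of named violations rather than a boolean, since the paper treats quality as multi-dimensional rather than pass/fail.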
Communications of The ACM | 1995
William B. Frakes; Christopher J. Fox
Software reuse is the use of existing software knowledge or artifacts to build new software artifacts. Reuse is sometimes confused with porting. The two are distinguished as follows: Reuse is using an asset in different systems; porting is moving a system across environments or platforms. For example, in Figure 1 a component in System A is shown used again in System B; this is an example of reuse. System A, developed for Environment 1, is shown moved into Environment 2; this is an example of porting.
Annals of Software Engineering | 1998
William B. Frakes; Ruben Prieto-Diaz; Christopher J. Fox
DARE (Domain Analysis and Reuse Environment) is a CASE tool that supports domain analysis – the activity of identifying and documenting the commonalities and variabilities in related software systems. DARE supports the capture of domain information from experts, documents, and code in a domain. Captured domain information is stored in a domain book that will typically contain a generic architecture for the domain and domain-specific reusable components.
IEEE Transactions on Software Engineering | 1996
William B. Frakes; Christopher J. Fox
The paper presents a failure modes model of parts-based software reuse, and shows how this model can be used to evaluate and improve software reuse processes. The model and the technique are illustrated using survey data about software reuse gathered from 113 people from 29 organizations.
Proceedings of the 2008 Workshop on Static Analysis | 2008
Michael S. Ware; Christopher J. Fox
A secure coding standard for Java does not exist. Even if a standard did exist, it is not known how well static analysis tools could enforce it. In this work, we show how well eight static analysis tools can identify violations of a comprehensive collection of coding heuristics for increasing the quality and security of Java SE code. A new taxonomy for correlating coding heuristics with the design principles they help to achieve is also described. The taxonomy aims to make understanding, applying, and remembering both principles and heuristics easier. A significant number of secure coding violations, some of which make attacks possible, were not identified by any tool. Even if all eight tools were combined into a single tool, more than half of the violations included in the study would not be identified.
International Conference of the Chilean Computer Science Society | 1997
William B. Frakes; Ruben Prieto-Diaz; Christopher J. Fox
DARE-COTS (Domain Analysis Research Environment for Commercial Off-The-Shelf software) is a CASE tool that supports domain analysis – the activity of identifying and documenting the commonalities and variabilities in related software systems. DARE-COTS supports the capture of domain information from experts, documents, and code in a domain. Captured domain information is stored in a domain book that typically contains a generic architecture for the domain and domain-specific reusable components.
Technical Symposium on Computer Science Education | 1996
Charles W. Reynolds; Christopher J. Fox
The need for a curriculum in Information Technology: The landscape of computing has changed dramatically for computing users. We routinely hear about the information superhighway, the world-wide web and the Internet, multimedia, virtual reality, electronic mail and bulletin boards, groupware, desktop publishing, expert systems, and knowledge engineering. Significantly, these are not topics discussed exclusively by computer scientists and other kinds of computing specialists. They are being discussed by the general user community, a community ill-trained to use these technologies fully and effectively. Although the computer has come out of the computing center and is sitting on the desktop, the immense and growing power of these computers is only partially utilized by users with limited time for learning about and understanding the rapidly changing information technologies available to them. The fastest-growing demand in our society is for computing professionals who help people use information technologies to solve their computing problems. Such a professional is unusual in combining strong technical skills (in networks, multimedia, databases, intelligent systems, and the integration, configuration, and management of these) with the human element (identifying user needs, designing friendly user interfaces, establishing local information policies and procedures, and integrating evolving standards). Traditional programs in computing have prepared students for jobs as researchers or developers rather than as information technologists. The growth of opportunities in information technology has created the need for programs emphasizing careers in this area. In this paper we propose requirements for a curriculum in Information Technology based on ACM Curriculum '91. We first use a profile of an information technologist to extend Curriculum '91 to include new knowledge units. We next refine the knowledge units to include learning objectives that specify the level of mastery of each knowledge unit. Information Technology curricula are then formed as a collection of courses covering the set of learning objectives to the specified level of mastery.
Information Processing and Management | 2002
Brian Fox; Christopher J. Fox
This paper presents an algorithm for generating stemmers from text stemmer specification files. A small study shows that the generated stemmers are computationally efficient, often running faster than stemmers custom written to implement particular stemming algorithms. The stemmer specification files are easily written and modified by non-programmers, making it much easier to create a stemmer, or tune a stemmer's performance, than would be the case with a custom stemmer program. Stemmer generation is thus also human-resource efficient.
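The core idea, compiling a declarative rule table into a stemming function, can be sketched as below. The rule format (suffix, replacement, minimum stem length) and the sample rules are simplifying assumptions for illustration; the paper's actual specification-file syntax is not reproduced here:

```python
def make_stemmer(rules):
    """Compile (suffix, replacement, min_stem_len) rules -- a stand-in
    for a stemmer specification file -- into a stemming function."""
    # Try longer suffixes first so the most specific rule wins.
    ordered = sorted(rules, key=lambda r: len(r[0]), reverse=True)

    def stem(word):
        for suffix, repl, min_len in ordered:
            # Apply a rule only if enough of a stem would remain.
            if word.endswith(suffix) and len(word) - len(suffix) >= min_len:
                return word[: len(word) - len(suffix)] + repl
        return word

    return stem

# Illustrative rules, not taken from the paper.
stem = make_stemmer([("ies", "y", 2), ("ing", "", 3), ("s", "", 3)])
print(stem("ponies"), stem("running"), stem("cats"))  # pony runn cat
```

Because the rules live in data rather than code, a non-programmer can edit them without touching the generator, which is the human-resource efficiency the abstract claims.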