Publications


Featured research published by James P. Fry.


ACM Computing Surveys | 1976

Evolution of Data-Base Management Systems

James P. Fry; Edgar H. Sibley

This paper deals with the history and definitions common to data-base technology. It delimits the objectives of data-base management systems, discusses important concepts, defines terminology for use by other papers in this issue, traces the development of data-base systems methodology, gives a uniform example, and presents some trends and issues.


ACM Transactions on Database Systems | 1976

Restructuring for large databases: three levels of abstraction

Shamkant B. Navathe; James P. Fry

The development of a powerful restructuring function involves two important components—the unambiguous specification of the restructuring operations and the realization of these operations in a software system. This paper is directed to the first component in the belief that a precise specification will provide a firm foundation for the development of restructuring algorithms and, subsequently, their implementation. The paper completely defines the semantics of the restructuring of tree structured databases. The delineation of the restructuring function is accomplished by formulating three different levels of abstraction, with each level of abstraction representing successively more detailed semantics of the function. At the first level of abstraction, the schema modification, three types are identified—naming, combining, and relating; these three types are further divided into eight schema operations. The second level of abstraction, the instance operations, constitutes the transformations on the data instances; they are divided into group operations such as replication, factoring, union, etc., and group relation operations such as collapsing, refinement, fusion, etc. The final level, the item value operations, includes the actual item operations, such as copy value, delete value, or create a null value.
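The three levels of abstraction described above can be illustrated with a toy sketch. The Python below is purely hypothetical (the paper defines these operations formally, not as code): `rename_item` stands in for a schema modification of the naming type, `factor_group` for an instance-level group operation, and `create_null_value` for an item value operation.

```python
# Illustrative sketch only: all names here are invented, and a real
# tree-structured database is far richer than this dict of record lists.
db = {
    "employee": [
        {"name": "Smith", "dept": "Sales", "phone": None},
        {"name": "Jones", "dept": "Sales", "phone": "555-0100"},
    ]
}

def rename_item(db, group, old, new):
    """Level 1: a schema modification of the 'naming' type."""
    for rec in db[group]:
        rec[new] = rec.pop(old)

def factor_group(db, group, key):
    """Level 2: an instance operation -- factor records on a common item."""
    factored = {}
    for rec in db[group]:
        factored.setdefault(rec[key], []).append(
            {k: v for k, v in rec.items() if k != key})
    return factored

def create_null_value(rec, item):
    """Level 3: an item value operation -- create a null value."""
    rec.setdefault(item, None)

rename_item(db, "employee", "dept", "department")
by_dept = factor_group(db, "employee", "department")
new_rec = {"name": "Doe"}
create_null_value(new_rec, "phone")
```

Each function only touches its own level: the schema operation changes names, the instance operation rearranges groups of records, and the item value operation manipulates a single value.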


International Conference on Management of Data | 1972

A developmental model for data translation

James P. Fry; Randall L. Frank; Ernest A. Hershey III

A model for generalized data translation is presented. Data translation is defined as “the process whereby data stored in a form that can be processed on one computer (the source file) can be translated into a form (target file) which can be used by the same or different processing systems on a possibly different computer.” Inputs to the Data Translator are the source data and two descriptive languages which drive the translation process. A description of the source and target data is presented to the data translator in a Stored Data Definition Language (SDDL). This description includes both the logical (data structure) aspects of the data as well as the physical (storage structure) aspects. A Translation Definition Language (TDL) is used to define the source to target translation parameters. The data translation model includes several components: the source and target converters, which deal with the storage structure of the data, and a restructurer component which is concerned with changes in the logical structure of the data. A Normal Form of Data is introduced and used to allow the Restructurer to operate independently of the source and target conversion processes. The Normal Form of data provides a means of representing data which is independent of current data structuring dependencies.
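A minimal sketch of the component decomposition described above, assuming a trivially simple "normal form" (a list of field dictionaries). The function names and file formats are hypothetical; in the actual model the SDDL and TDL descriptions drive these steps declaratively rather than as code.

```python
# Pipeline sketch: source converter -> restructurer -> target converter,
# with the normal form insulating the restructurer from storage details.

def source_converter(raw: str) -> list[dict]:
    """Storage-structure level: parse a fixed-format source file
    into the normal form."""
    records = []
    for line in raw.strip().splitlines():
        name, dept = line.split(",")
        records.append({"name": name, "dept": dept})
    return records

def restructurer(records: list[dict]) -> list[dict]:
    """Logical-structure level: operates on the normal form only,
    independent of source and target storage structures."""
    return [{"employee": r["name"], "department": r["dept"]}
            for r in records]

def target_converter(records: list[dict]) -> str:
    """Storage-structure level: emit the target file format."""
    return "\n".join(f"{r['department']}|{r['employee']}" for r in records)

target = target_converter(
    restructurer(source_converter("Smith,Sales\nJones,Legal")))
```

The point of the middle stage is the one the abstract makes: because the restructurer sees only the normal form, swapping either converter leaves it untouched.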


International Conference on Management of Data | 1972

An approach to Stored Data Definition and Translation

James P. Fry; Diane C. P. Smith; Robert W. Taylor

The CODASYL Stored Data Definition and Translation Task Group has for the past two years been investigating the major components necessary in a language for data translation. Data translation is defined as the process whereby data that can be processed on one computer (the source file) can be translated into a form (the target file) which can be used by the same or a different processing system on a possibly different computer. Two languages are necessary—a Stored Data Definition Language to characterize the logical structure and physical realization of the source and target files, and a Translation Definition Language for defining how target file data instances are to be derived from source file data instances. This report discusses major components of each language and provides sample statements in terms of an example data translation from a COBOL file to a NIPS/360 file.
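As an illustration of the division of labor between the two languages, the sketch below mimics their roles with hypothetical Python dictionaries. The actual SDDL and TDL are declarative languages whose syntax is not shown here; the record and item names (`PERSONNEL`, `EMP-NAME`, `STAFF`, etc.) are invented for the example.

```python
# Hypothetical stand-ins: one SDDL-like description per file
# (logical structure plus physical realization), and a TDL-like
# mapping that derives target instances from source instances.

source_sddl = {
    "record": "PERSONNEL",
    "items": [("EMP-NAME", "char", 20), ("EMP-DEPT", "char", 10)],
    "storage": "sequential",
}

target_sddl = {
    "record": "STAFF",
    "items": [("NAME", "char", 20), ("UNIT", "char", 10)],
    "storage": "indexed",
}

tdl = {  # target item <- source item
    "NAME": "EMP-NAME",
    "UNIT": "EMP-DEPT",
}

def translate(record: dict, tdl: dict) -> dict:
    """Derive one target instance from one source instance per the TDL."""
    return {tgt: record[src] for tgt, src in tdl.items()}

staff = translate({"EMP-NAME": "Smith", "EMP-DEPT": "Sales"}, tdl)
```

The SDDL descriptions characterize each file on its own; only the TDL says how the two relate, which is exactly the separation the task group's two-language design aims for.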


Computer Networks | 1976

Distributed data bases: A summary of research

Mark E. Deppe; James P. Fry

The overall objective for distributed data bases is the sharing of data among several distinct but inter-connected computing facilities through an integrating mechanism. A review of the literature indicates that little progress has been made in this area due to the large number of technological problems involved. Some researchers have obtained analytical/theoretical results in the area of physical data allocation under the restrictive assumptions of static and known access patterns and independence between programs and data. The remaining unsolved technological and operational problems include measurement and evaluation techniques, maintenance of multiple image files, and security/privacy. In the next five to ten years significant benefits would accrue if data translation techniques, an integrated data base control system, and the integrated data base schema and physically distributed data issues were investigated.


International Conference on Management of Data | 1974

A data description language approach to file translation

Alan G. Merten; James P. Fry

The basic research on data definition languages and the translation algorithm was supported by the Air Force Office of Scientific Research, Air Force Systems Command, U.S.A.F., under Grant No. AFOSR-72-2219. The development and implementation of the Prototype Data Translator was supported by the Joint Technical Support Activity of the Defense Communications Agency under Contract No. DCA 100-72-C-0019. The authors would like to express their appreciation to Janet Eggleton for her assistance in the technical editing and final preparation of this paper.


Very Large Data Bases | 1979

1978 New Orleans Data Base Design Workshop Report

Vincent Y. Lum; Sakti P. Ghosh; Mario Schkolnick; Robert W. Taylor; D. Jefferson; Stanley Y. W. Su; James P. Fry; Toby J. Teorey; B. Yao; D. S. Rund; Beverly K. Kahn; Shamkant B. Navathe; Diane C. P. Smith; L. Aguilar; W. J. Barr; P. E. Jones

This is a summary of a larger report based on the results of the 1978 New Orleans data base design workshop. The paper outlines the four major areas of data base design and discusses the important issues, some of the results that have been achieved, and future research problems.


National Computer Conference | 1976

Generalized software for translating data

Edward W. Birss; James P. Fry

Many data processing installations are confronted with the problem of data conversion. Some of the conversion problems are conversion of files foreign to the installation, conversion of files into a data base management system format, and conversion of all data to upgrade hardware or software. Simple file organizations pose few conversion problems, while logically and physically complex data bases emphasize many conversion problems. The current approach of writing specific translation programs is time consuming and frequently inaccurate; a new approach is desirable. To address these conversion problems, The University of Michigan Data Translation Project has developed a generalized translation methodology. This methodology has been applied in the development of several prototype data translators. These translators have progressively advanced the physical transformation capabilities (reformatting) and the logical transformation capabilities (restructuring). The reformatting capabilities of the translators include the ability to access and modify the physical storage structures which support sequential, indexed sequential, and network organizations. The restructuring capabilities allow complex restructuring of lists, trees, and networks. Future extensions to the translation methodology include the decomposition of the translation process into small, but specific steps. Languages would be developed to address each of these small translations, and could lead to a generalized accessing mechanism and a data interchange form.


International Conference on Management of Data | 1974

Towards a formulation and definition of data reorganization

James P. Fry; David W. Jeris

Data reorganization can be informally defined as the process of changing the logical and/or physical organization of data so that it can be processed more effectively in a new hardware/software environment. Motivation for data reorganization is presented and examples given. The major issues that were taken into consideration in the formulation of a definition of data reorganization are discussed. These include the scope of the definition and issues of loss of information and data independence. A formal definition of data reorganization is presented and discussed through the use of a four level model of data. The spectrum of reorganization, from logical to physical, is discussed, and examples presented. Finally, the complexity of data reorganization operations is discussed.


Very Large Data Bases | 1979

Database Program Conversion: A Framework For Research

Robert W. Taylor; James P. Fry; Ben Shneiderman; Diane C. P. Smith; Stanley Y. W. Su

As requirements change, database administrators come under pressure to change the schema, which is a description of the database structure. Although writing a new schema is a relatively easy job and transforming the database to match the schema can be accomplished with modest effort, transforming the numerous programs which operate on the database often requires enormous effort. This interim report describes previous research, defines the problem, and proposes a framework for research on the automatic conversion of database programs to match schema transformations. The approach is based on a precise description of the data structures, integrity constraints, and permissible operations. This work will help designers of manual and computer-aided conversion facilities, database administrators who are considering conversions, and developers of future database management systems, which will have ease of conversion as a design goal.

Collaboration


Dive into James P. Fry's collaboration.

Top Co-Authors

Shamkant B. Navathe

Georgia Institute of Technology

B. K. Bhargava

University of Pittsburgh