Publication


Featured research published by Daniel E. Atkins.


IEEE Computer | 1996

Toward inquiry-based education through interacting software agents

Daniel E. Atkins; William P. Birmingham; Edmund H. Durfee; Eric J. Glover; Tracy Mullen; Elke A. Rundensteiner; Elliot Soloway; José M. Vidal; Raven Wallace; Michael P. Wellman

The University of Michigan Digital Library (UMDL) project is creating an infrastructure for rendering library services over a digital network. When fully developed, the UMDL will provide a wealth of information sources and library services to students, researchers, and educators. Tasks are distributed among numerous specialized modules called agents. The three classes of agents are user interface agents, mediator agents, and collection interface agents. Complex tasks are accomplished by teams of specialized agents working together, for example by interleaving various types of search. The UMDL is being deployed in three arenas: secondary-school science classrooms, the University of Michigan library, and space-science laboratories. The development team expects the scale and diversity of the project to test their technical ideas about distributed agents, interoperability, mediation, and economical resource allocation.
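
The three-tier agent organization can be pictured with a short sketch. The following Python is purely illustrative; the class and method names are hypothetical, not the UMDL's actual interfaces: a user-interface agent hands a query to a mediator, which fans it out to collection-interface agents and merges the results.

```python
class CollectionAgent:
    """Wraps one information source behind a uniform query interface."""
    def __init__(self, name, documents):
        self.name = name
        self.documents = documents  # e.g., a list of title strings

    def search(self, query):
        return [d for d in self.documents if query.lower() in d.lower()]

class MediatorAgent:
    """Routes a task to the collection agents and interleaves results."""
    def __init__(self, collections):
        self.collections = collections

    def search(self, query):
        results = []
        for agent in self.collections:
            results.extend((agent.name, hit) for hit in agent.search(query))
        return results

class UserInterfaceAgent:
    """Accepts a user query and presents the mediated answer."""
    def __init__(self, mediator):
        self.mediator = mediator

    def ask(self, query):
        for source, hit in self.mediator.search(query):
            print(f"[{source}] {hit}")

# Example: two specialized collections behind one mediator.
ui = UserInterfaceAgent(MediatorAgent([
    CollectionAgent("space-science", ["Auroral imaging basics"]),
    CollectionAgent("classroom", ["Inquiry-based lab guide"]),
]))
ui.ask("inquiry")  # prints: [classroom] Inquiry-based lab guide
```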


IEEE Computer | 1975

Introduction to the Role of Redundancy in Computer Arithmetic

Daniel E. Atkins

Redundancy, the state of being in excess of what is necessary, as applied in the implementation of computer arithmetic, is motivated by three design goals: to improve reliability, to increase speed of operation, and/or to provide structural flexibility. In achieving the first goal, improvement of reliability, hardware redundancy and/or redundant arithmetic codes are applied to the detection and correction of faults. Although this is an increasingly vital area, it will not be discussed in this paper. Rather, the focus will be on the other two potential benefits: more specifically, on the judicious use of number systems employing redundancy in representation. A positional number system with fixed radix, r, is redundant if the allowable digit set includes more than r distinct elements, thereby affording alternate representations of a given numeric value. Uniqueness of representation is sacrificed in the hope of greater gains. A novel, rigorous treatment of redundant, radix polynomial representation is included in Reference 1.
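
As a concrete illustration of the definition above (not taken from the paper), here is a minimal Python sketch of a radix-2 system with the signed digit set {-1, 0, 1}: three allowable digits exceed the radix, so a single value admits several representations.

```python
def value(digits, radix=2):
    """Evaluate a radix polynomial: sum of d_i * radix**i,
    with digits listed most-significant first."""
    v = 0
    for d in digits:
        v = v * radix + d
    return v

# Two distinct signed-digit representations of the same value, 3:
print(value([0, 1, 1]))   # 0*4 + 1*2 + 1*1 = 3
print(value([1, 0, -1]))  # 1*4 + 0*2 - 1*1 = 3
```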


IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 1984

A Class of Cellular Architectures to Support Physical Design Automation

Rob A. Rutenbar; Trevor N. Mudge; Daniel E. Atkins

Special-purpose hardware has been proposed as a solution to several increasingly complex problems in design automation. This paper examines a class of cellular architectures called raster pipeline subarrays (RPS architectures), applicable to problems in physical DA that are (1) representable on a cellular grid, and (2) characterized by local functional dependencies among grid cells. Machines with this architecture first evolved in conventional cellular applications that exhibit similarities to grid-based DA problems. To analyze the properties of the RPS organization in context, machines designed for cellular applications are reviewed, and it is shown that many DA machines proposed or constructed for grid-based problems fit naturally into a taxonomy of cellular machines. The implementation of DA algorithms on RPS hardware is partitioned into local issues, which involve the processing of individual cell neighborhoods, and global issues, which involve strategies for handling complete grids in a pipeline environment. Design rule checking and routing algorithms are examined in an RPS environment with respect to these issues. Experimental measurements for such algorithms running on an existing RPS machine exhibit significant speedups. From these studies are derived the necessary performance characteristics of RPS hardware optimized specifically for grid-based DA. Finally, the practical merits of such an architecture are evaluated.
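
A minimal software sketch (illustrative only, not the paper's hardware) of the local computation one RPS stage performs: the grid streams past in raster order and each output cell is a function of its 3x3 neighborhood, here a grow (dilation) step of the kind used in grid-based design-rule checking.

```python
def rps_stage(grid, f):
    """Apply neighborhood function f over the grid in raster order."""
    rows, cols = len(grid), len(grid[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):            # raster scan, row by row
        for c in range(cols):
            window = [grid[rr][cc]
                      for rr in range(max(0, r - 1), min(rows, r + 2))
                      for cc in range(max(0, c - 1), min(cols, c + 2))]
            out[r][c] = f(window)
    return out

layout = [[0, 0, 0],
          [0, 1, 0],
          [0, 0, 0]]
grown = rps_stage(layout, max)       # one grow step: the 1 dilates
for row in grown:
    print(row)                       # every cell touching the 1 is now 1
```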


Human Factors in Computing Systems | 1994

The upper atmospheric research collaboratory

Susan E. McDaniel; Gary M. Olson; Terry E. Weymouth; C. E. Rasmussen; Atul Prakash; C. R. Clauer; Daniel E. Atkins; R. Penmetsa; N. H. Manohar; Hyong Sop Shim

Although observations of upper atmospheric phenomena are made at all latitudes, because of the characteristics of the earth’s magnetic field, most ground-based instruments are concentrated at high latitudes, particularly in the Arctic. Many of these facilities are in remote areas and are relatively difficult to reach. With the ending of the Cold War in the early 1990s, inexpensive military flights to many of these remote areas ended, making access more difficult and more expensive. Fortunately, these changing circumstances coincided with the emergence of the Internet. It occurred to a number of scientists in the field that network connections to these remote facilities could improve access and have a beneficial effect on the practice of science. Obviously network access to remote facilities would ameliorate the transportation difficulties, but in addition, it would provide greater access to such facilities for scientists and students at all kinds of institutions and would offer great flexibility for the scheduling of observations to coincide with scientifically important events. For instance, spontaneous coordinated scientific campaigns in response to events such as solar flares would be possible. These problems and opportunities fit very nicely the vision of a collaboratory [1]. In 1992, a group of space scientists, computer scientists, and behavioral scientists at the University of Michigan obtained funding from the National Science Foundation to launch the Upper Atmospheric Research Collaboratory (UARC). UARC is a 6-year project to design, develop, deploy, and evaluate a testbed collaboratory.


Computers and Biomedical Research | 1979

Ultra high speed transaxial image reconstruction of the heart, lungs, and circulation via numerical approximation methods and optimized processor architecture.

Barry K. Gilbert; Aloysius Chu; Daniel E. Atkins; Earl E. Swartzlander; Erik L. Ritman

A high temporal resolution scanning multiaxial tomography unit, the Dynamic Spatial Reconstructor (DSR), presently under development will be capable of recording multiangular X-ray projection data of sufficient axial range to reconstruct a cylindrical volume consisting of up to 240 contiguous 1-mm thick cross sections encompassing the intact thorax. At repetition rates of up to 60 sets of cross sections per second, the DSR will thus record projection data sufficient to reconstruct as many as 14,400 cross-sectional images during each second of operation. Use of this system in a clinical setting will be dependent upon the development of software and hardware techniques for carrying out X-ray reconstructions at the rate of hundreds of cross sections per second. A conceptual design, with several variations, is proposed for a special purpose hardware reconstruction processor capable of completing a single cross section reconstruction within 1 to 2 msec. In addition, it is suggested that the amount of computation required to execute the filtered back-projection algorithm may be decreased significantly by the utilization of approximation equations, formulated as recursions, for the generation of internal constants required by the algorithm. The effects on reconstructed image quality of several different approximation methods are investigated by reconstruction of density projections generated from a mathematically simulated model of the human thorax, assuming the same source-detector geometry and X-ray flux density as will be employed by the DSR. These studies have indicated that the prudent application of numerical approximations for the generation of internal constants will not cause significant degradation in reconstructed image quality and will in fact require substantially less auxiliary memory and computational capacity than required by direct execution of mathematically exact formulations of the reconstruction algorithm.
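
The constant-generation idea can be illustrated with a small sketch (an assumption for illustration, not the paper's actual recursions): trigonometric constants that a filtered back-projection loop consumes can be produced by a two-term recurrence, trading one cosine evaluation per step for a multiply and a subtract.

```python
import math

def cos_table(theta, n):
    """Generate cos(k*theta) for k = 0..n-1 by the Chebyshev recurrence
    cos((k+1)t) = 2*cos(t)*cos(k*t) - cos((k-1)t), avoiding per-step
    trig evaluation in the inner loop."""
    two_cos = 2.0 * math.cos(theta)
    table = [1.0, math.cos(theta)]        # cos(0*theta), cos(1*theta)
    for _ in range(2, n):
        table.append(two_cos * table[-1] - table[-2])
    return table[:n]

approx = cos_table(0.01, 1000)
exact = [math.cos(0.01 * k) for k in range(1000)]
print(max(abs(a - e) for a, e in zip(approx, exact)))  # tiny drift
```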


Symposium on Computer Arithmetic | 1983

A comparison of ALU structures for VLSI technology

Shauchi Ong; Daniel E. Atkins

Although many of the basic techniques of computer arithmetic have been known since the earliest days of electronic computing, there is a continuing need to re-evaluate them in the context of developments in VLSI circuit technology. Furthermore, recent work in complexity of algorithms, particularly the solution of recurrence relations, suggests new candidate structures for generating the carry vector and raises the question of their practicality in modern logic design practice.
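
The recurrence in question is the carry recurrence. A minimal Python sketch (illustrative, not from the paper): with generate g_i = a_i AND b_i and propagate p_i = a_i XOR b_i, the carries satisfy c_{i+1} = g_i OR (p_i AND c_i); the ALU structures being compared differ in how this recurrence is solved, serially (ripple) or by parallel prefix evaluation of the (g, p) operator.

```python
def carries(a_bits, b_bits, c0=0):
    """Solve the carry recurrence sequentially (ripple form).
    Bits are listed least-significant first; c[i] is the carry
    into bit position i."""
    g = [a & b for a, b in zip(a_bits, b_bits)]   # generate
    p = [a ^ b for a, b in zip(a_bits, b_bits)]   # propagate
    c = [c0]
    for gi, pi in zip(g, p):
        c.append(gi | (pi & c[-1]))
    return c

# 4-bit example, LSB first: 0b0111 + 0b0001 -> carries ripple upward.
print(carries([1, 1, 1, 0], [1, 0, 0, 0]))        # [0, 1, 1, 1, 0]
```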


Design Automation Conference | 1982

Cellular Image Processing Techniques for VLSI Circuit Layout Validation and Routing

Trevor N. Mudge; Rob A. Rutenbar; Daniel E. Atkins; R.M. Lougheed

The architecture of the Cytocomputer™, an existing special-purpose, pipelined cellular image processor, is described. A formalism used to express cellular operations on images is then given. Cellular image processing algorithms are then developed that perform (1) design rule checks (DRCs) on VLSI circuit layouts, and (2) Lee-type wire routing. Two sets of cellular image processing transformations for checking the Mead and Conway design rules and for Lee-routing have been defined and used to program the Cytocomputer. Some experimental results are shown for these cellular implementations.
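
A minimal sketch of Lee-type expansion expressed as repeated local transformations (illustrative Python, not Cytocomputer code): each pass over the grid lets every labeled cell relax its 4-neighbors, the kind of neighborhood rule a pipelined cellular processor applies image-wide per stage.

```python
def lee_expand(grid, src, dst):
    """Label the grid in place with wavefront distances.
    grid: 0 = free, -1 = obstacle. Returns shortest route length or None."""
    rows, cols = len(grid), len(grid[0])
    grid[src[0]][src[1]] = 1
    changed = True
    while changed:                        # repeat passes until stable
        changed = False
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] > 0:
                    step = grid[r][c] + 1
                    for rr, cc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
                        if (0 <= rr < rows and 0 <= cc < cols
                                and (grid[rr][cc] == 0 or grid[rr][cc] > step)):
                            grid[rr][cc] = step
                            changed = True
    label = grid[dst[0]][dst[1]]
    return label - 1 if label > 0 else None

maze = [[0, -1, 0],
        [0, -1, 0],
        [0,  0, 0]]
print(lee_expand(maze, (0, 0), (0, 2)))   # route around the wall: 6
```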


ACM Sigmicro Newsletter | 1983

Tree compaction of microprograms

Jehkwan Lah; Daniel E. Atkins

Although Fisher's trace scheduling procedure for global compaction may produce significant reductions in the execution time of compacted microcode, the growth in memory size caused by extensive copying of blocks can be enormous. In the worst case, memory size can grow exponentially [FIS81a], and the complex bookkeeping stage of trace scheduling is an obstacle to implementation. A technique called tree compaction, which is based on trace scheduling, is proposed to mitigate these drawbacks. Basically, it partitions a given set of microprogram blocks into tree-shaped subsets and applies the idea of trace scheduling to each tree-shaped subset separately. It achieves almost all of the compaction of Fisher's trace scheduling procedure except that which causes copying of blocks. Preliminary tests indicate that tree compaction yields execution times almost as short as trace scheduling, but with much less memory. The paper includes such an example.
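
One way to picture the partitioning step (a hypothetical sketch, not the paper's algorithm): cut every control-flow edge entering a block with more than one predecessor; the surviving edges form a forest, and a trace inside one tree never crosses a join point, so compacting it requires no block copying.

```python
from collections import defaultdict

def tree_regions(edges, blocks):
    """Partition blocks into tree-shaped regions by cutting edges
    into any block that has more than one predecessor."""
    preds = defaultdict(int)
    for u, v in edges:
        preds[v] += 1
    kept = [(u, v) for u, v in edges if preds[v] == 1]  # tree edges only
    parent = {b: b for b in blocks}     # union-find over kept edges
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for u, v in kept:
        parent[find(v)] = find(u)
    regions = defaultdict(list)
    for b in blocks:
        regions[find(b)].append(b)
    return list(regions.values())

# Diamond CFG: D has two predecessors, so the edges into it are cut.
print(tree_regions([("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")],
                   ["A", "B", "C", "D"]))   # [['A', 'B', 'C'], ['D']]
```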


IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 1988

Systolic routing hardware: performance evaluation and optimization

Rob A. Rutenbar; Daniel E. Atkins

The performance of maze-routing algorithms mapped onto linear systolic array hardware is examined. Cell expansions in the wavefront-expansion phase of maze routing are performed in parallel in each processing stage of the hardware as the routing grid streams through the processor array. The authors concentrate on optimizing the performance of single-net routing problems with respect to a given systolic hardware configuration. A heuristic called constant-increment framing is introduced as a simple method for scheduling all the required wavefront expansion steps on a pipeline of processors. One-layer and two-layer routers using this heuristic have been implemented on a prototype systolic processor. Experimental and theoretical comparisons suggest that the constant-increment heuristic exhibits performance within a factor of two of optimal over a range of hardware configurations, and is substantially easier to compute than the optimal solution.
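
A back-of-the-envelope model (assumptions for illustration, not the paper's analysis) shows why pipeline depth matters here: if one streaming pass of an N-cell grid through k expansion stages advances the wavefront by k steps, then a net whose wavefront depth is D needs about ceil(D/k) passes, and a constant-increment schedule simply fixes that per-pass advance ahead of time.

```python
import math

def passes_needed(depth, stages):
    """Passes to expand a wavefront of the given depth, k steps per pass."""
    return math.ceil(depth / stages)

def route_time(cells, depth, stages):
    """Total cost in cell-times, ignoring pipeline fill and drain."""
    return passes_needed(depth, stages) * cells

# Doubling the pipeline roughly halves the pass count for deep nets.
for k in (1, 2, 4, 8):
    print(k, route_time(cells=256 * 256, depth=100, stages=k))
```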


Eos, Transactions American Geophysical Union | 1994

New project to support scientific collaboration electronically

C. R. Clauer; C. E. Rasmussen; Rick Niciejewski; T. L. Killeen; J. D. Kelly; Y. Zambre; T. J. Rosenberg; Peter Stauning; E. Friis-Christensen; S. B. Mende; Terry E. Weymouth; Atul Prakash; S. E. McDaniel; Gary M. Olson; Thomas A. Finholt; Daniel E. Atkins

A new multidisciplinary effort is linking research in the upper atmospheric and space, computer, and behavioral sciences to develop a prototype electronic environment for conducting team science worldwide. A real-world electronic collaboration testbed has been established to support scientific work centered around the experimental operations being conducted with instruments from the Sondrestrom Upper Atmospheric Research Facility in Kangerlussuaq, Greenland. Such group computing environments will become an important component of the National Information Infrastructure initiative, which is envisioned as the high-performance communications infrastructure to support national scientific research.

Collaboration


Dive into Daniel E. Atkins's collaborations.

Top Co-Authors

Gary M. Olson
University of California

T. L. Killeen
National Center for Atmospheric Research