Publication


Featured research published by Ben Abbott.


IEEE Software | 1993

Model-based software synthesis

Ben Abbott; Ted Bapty; Csaba Biegl; Gabor Karsai

The knowledge-representation and compilation techniques used in a model-based, automatic software synthesis environment are discussed. The environment was used to build CADDMAS, a system with more than 250 cooperating processes. The real-time execution environment automatically generates a macro-dataflow computation from declarative models. Central to the approach is the Multigraph Architecture, which provides the framework for model-based synthesis in real-time, parallel-computing environments. Application of CADDMAS to the analysis of data from testing new and redesigned turbine engines is described.
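
The core idea summarized here, compiling a declarative model into an executable macro-dataflow computation, can be illustrated with a toy sketch. Everything below (the block names, the dependency format, the scheduler) is hypothetical and is not the Multigraph Architecture's actual implementation; it only shows a dependency model being turned into an execution order.

```python
# Illustrative sketch only: a toy declarative model compiled into a
# macro-dataflow schedule. Names and structure are invented and do not
# reflect the actual Multigraph Architecture implementation.
from collections import deque

# Declarative model: each block names the blocks whose outputs it consumes.
model = {
    "sensor":  [],
    "filter":  ["sensor"],
    "fft":     ["filter"],
    "display": ["fft"],
    "logger":  ["fft"],
}

def synthesize_schedule(model):
    """Topologically order the blocks so each fires after its inputs."""
    indegree = {block: len(deps) for block, deps in model.items()}
    consumers = {block: [] for block in model}
    for block, deps in model.items():
        for dep in deps:
            consumers[dep].append(block)
    ready = deque(b for b, d in indegree.items() if d == 0)
    schedule = []
    while ready:
        block = ready.popleft()
        schedule.append(block)
        for consumer in consumers[block]:
            indegree[consumer] -= 1
            if indegree[consumer] == 0:
                ready.append(consumer)
    if len(schedule) != len(model):
        raise ValueError("model contains a dependency cycle")
    return schedule

print(synthesize_schedule(model))
# e.g. ['sensor', 'filter', 'fft', 'display', 'logger']
```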


ASME 1994 International Gas Turbine and Aeroengine Congress and Exposition | 1994

CADDMAS: A Real-Time Parallel System for Dynamic Data Analysis

Thomas F. Tibbals; Theodore A. Bapty; Ben Abbott

Arnold Engineering Development Center (AEDC) has designed and built a high-speed data acquisition and processing system for real-time online dynamic data monitoring and analysis. The Computer Assisted Dynamic Data Monitoring and Analysis System (CADDMAS) provides 24 channels at high frequency and another 24 channels at low frequency for online real-time aeromechanical, vibration, and performance analysis of advanced turbo-engines and other systems. The system is primarily built around two different parallel processors and several PCs to demonstrate hardware independence and architecture scalability. These processors provide the computational power to display online and in real-time what can take from days to weeks using existing offline techniques. The CADDMAS provides online test direction and immediate hardcopy plots for critical parameters, all the while providing continuous health monitoring through parameter limit checking. Special in-house developed Front End Processors (FEP) sample the dynamic signals, perform anti-aliasing, signal transfer function correction, and bandlimit filtering to improve the accuracy of the time domain signal. A second in-house developed Numeric Processing Element (NPE) performs the FFT, threshold monitoring, and packetizes the data for rapid asynchronous access by the parallel network. Finally, the data are then formatted for display, hardcopy plotting, and cross-channel processing within the parallel network utilizing off-the-shelf hardware. The parallel network is a heterogeneous message-passing parallel pipeline configuration which permits easy scaling of the system. Advanced parallel processing scheduler/controller software has been adapted specifically for CADDMAS to allow quasi-dynamic instantiation of a variety of simultaneous data processing tasks concurrent with display and alarm monitoring functions without gapping the data. Although many applications of CADDMAS exist, this paper describes the features of CADDMAS, the development approach, and the application of CADDMAS for turbine engine aeromechanical testing.
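
As a rough illustration of the per-channel work the abstract assigns to the Numeric Processing Element (FFT, threshold monitoring, packetization), here is a hedged Python/NumPy sketch. The block size, threshold, window choice, and packet layout are assumptions for illustration, not details from the paper.

```python
# Hedged sketch of NPE-style per-channel processing: FFT one block,
# check a limit, and package the result. All constants are assumed.
import numpy as np

BLOCK = 1024          # samples per processing block (assumed)
LIMIT = 5.0           # alarm threshold on spectral magnitude (assumed)

def npe_process(channel_id, samples, sample_rate):
    """FFT one block, check limits, and return a small packet dict."""
    window = np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(samples * window))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return {
        "channel": channel_id,
        "peak_hz": float(freqs[np.argmax(spectrum)]),
        "alarm": bool((spectrum > LIMIT).any()),
        "spectrum": spectrum.astype(np.float32),  # packetized for the network
    }

# Example: a 180 Hz tone sampled at 10 kHz
t = np.arange(BLOCK) / 10_000.0
packet = npe_process(3, np.sin(2 * np.pi * 180 * t) * 8, sample_rate=10_000.0)
print(packet["peak_hz"], packet["alarm"])
```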


IEEE International Conference on High Performance Computing, Data, and Analytics | 1994

Parallel architectures with flexible topology

Ákos Lédeczi; Ben Abbott

Parallel computer architectures with flexible topology are ideal for compute-intensive signal processing and instrumentation applications because they provide supercomputing performance and high I/O bandwidth. These systems can be implemented at low cost using processors like the transputer, the TMS320C40, or the ADSP-21060. However, due to difficulties with hardware and software complexity management, the number of successful large-scale applications is limited. Modeling the system graphically from multiple aspects (hardware and software) and automatically processing the models helps manage the complexity of these systems. The presented approach supports automatic topology verification, network loader configuration, process assignment, and deadlock-free message routing.
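
The model-processing steps named above (topology verification, routing-table generation) can be sketched in miniature. The adjacency list, node names, and BFS routing below are illustrative assumptions; real deadlock-free routing on transputer-style links needs additional constraints (for example, restricted turns or channel ordering) that this sketch deliberately omits.

```python
# Minimal sketch (not the authors' tooling): given a processor interconnect
# described as an adjacency list, verify the topology is connected and build
# a shortest-path routing table per node. Node names are hypothetical.
from collections import deque

links = {
    "P0": ["P1", "P2"],
    "P1": ["P0", "P3"],
    "P2": ["P0", "P3"],
    "P3": ["P1", "P2"],
}

def routing_table(source, links):
    """BFS from `source`; record the first hop toward every other node."""
    first_hop, frontier = {}, deque()
    for neighbor in links[source]:
        first_hop[neighbor] = neighbor
        frontier.append(neighbor)
    visited = {source} | set(links[source])
    while frontier:
        node = frontier.popleft()
        for nxt in links[node]:
            if nxt not in visited:
                visited.add(nxt)
                first_hop[nxt] = first_hop[node]   # inherit the first hop
                frontier.append(nxt)
    return first_hop

tables = {node: routing_table(node, links) for node in links}
assert all(len(t) == len(links) - 1 for t in tables.values()), "topology not connected"
print(tables["P0"])   # e.g. {'P1': 'P1', 'P2': 'P2', 'P3': 'P1'}
```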


Southeastern Symposium on System Theory | 1990

TOPS: a distributed operating system kernel for transputer systems

Hubertus Franke; Ben Abbott

The authors describe a programming environment (TOPS) that creates an abstraction level such that a network of transputers is regarded as a virtual machine independent of the network topology. TOPS provides an extended process model and a higher level of message passing than is provided by a bare transputer. The development of TOPS arose in the context of various projects requiring a higher level of flexibility. The discussion covers the transputer system, functionality of TOPS, the task model in TOPS, event handling, dynamic memory management, input-output in TOPS, and an evaluation of TOPS.
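
To make the idea of a topology-independent virtual machine concrete, here is a toy send/receive layer. This is not the TOPS API; the Kernel class, task names, and mailbox scheme are invented for illustration, and Python threads stand in for transputer processes.

```python
# Toy illustration only: tasks address each other by name and the kernel
# handles delivery, hiding where each task physically runs.
import queue
import threading

class Kernel:
    def __init__(self):
        self.mailboxes = {}

    def spawn(self, name, body):
        """Register a task and start it; its mailbox hides its location."""
        self.mailboxes[name] = queue.Queue()
        threading.Thread(target=body, args=(self, name), daemon=True).start()

    def send(self, dest, msg):
        # Delivery is by task name; the physical route is the kernel's concern.
        self.mailboxes[dest].put(msg)

    def receive(self, name):
        return self.mailboxes[name].get()

def worker(kernel, me):
    msg = kernel.receive(me)
    kernel.send("main", f"{me} got: {msg}")

k = Kernel()
k.mailboxes["main"] = queue.Queue()   # mailbox for the main task itself
k.spawn("worker-1", worker)
k.send("worker-1", "hello")
print(k.receive("main"))              # -> worker-1 got: hello
```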


Engineering of Computer-Based Systems | 2003

Model-integrated design toolset for polymorphous computer-based systems

Brandon Eames; Ted Bapty; Ben Abbott; Sandeep Neema; Kumar Gaurav Chhokra

Polymorphous computer-based systems are systems in which the CPU architecture morphs or changes shape to meet the requirements of the application. Optimized and efficient design for these systems requires exploration along axes beyond those of traditional system design. In this paper we outline a model-integrated toolset to aid in the specification, analysis and synthesis of polymorphous applications. Polymorphous systems can be developed utilizing a four-tiered approach, where inherent application properties and characteristics govern design practices at each level. We show through the development of the model-integrated approach that polymorphous system design is inherently coupled with the search and exploration of a combinatorial space of design tradeoffs. Design tools are needed to efficiently evaluate this large and complex space in order to arrive at near-optimal application implementations.
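
The coupling the abstract describes between polymorphous design and exploration of a combinatorial tradeoff space can be illustrated with a deliberately tiny sketch: enumerate candidate configurations, reject infeasible ones, and rank the rest. The parameters, power model, and budget below are invented, not taken from the toolset.

```python
# Hypothetical design-space exploration sketch: exhaustive enumeration with
# a toy feasibility constraint and a toy cost (latency) model.
from itertools import product

cpu_counts = (1, 2, 4, 8)
clock_mhz  = (100, 200, 400)
voltage    = (1.0, 1.2, 1.5)

def feasible(cpus, mhz, v):
    power = cpus * mhz * v * v * 0.001    # toy power model (watts)
    return power <= 2.5                   # assumed power budget

def latency(cpus, mhz, v):
    work = 1e6                            # assumed cycles of work
    return work / (cpus * mhz * 1e6)      # seconds, ignoring overheads

candidates = [
    (latency(c, m, v), c, m, v)
    for c, m, v in product(cpu_counts, clock_mhz, voltage)
    if feasible(c, m, v)
]
best = min(candidates)
print(f"best: {best[1]} CPUs @ {best[2]} MHz, {best[3]} V, "
      f"latency {best[0] * 1e3:.3f} ms")
```

In a realistic toolset the space is far too large for brute force, which is the paper's point about needing dedicated exploration tools.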


Southeastern Symposium on System Theory | 1990

Performance optimization in signal processing systems

A. Misra; Ben Abbott

Techniques for optimizing the utilization of underlying computer resources with respect to a dynamic signal-processing system executing on them are discussed. These techniques are illustrated with an example system: a structurally adaptive solution to the sonar direction-of-arrival problem, implemented under the Multigraph architecture. The signal processing system runs in a distributed, parallel environment, the Multigraph execution environment. Above the executing real-time signal-processing system, a controller guides and coordinates overall goals (e.g. tracking). It also manages the available system resources, taking into account the memory and time requirements of the signal-processing algorithms and their priorities. Further, a user interface allows priorities and various operating parameters of the system to be changed dynamically.
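
A minimal sketch of the resource-management idea described above: a controller admits signal-processing tasks in priority order subject to memory and CPU budgets. The task list, budgets, and greedy policy are made up for illustration and are not the paper's algorithm.

```python
# Illustrative only: priority-ordered admission under memory/CPU budgets.
tasks = [
    # (name, priority, memory_mb, cpu_share)
    ("beamformer",  10, 64, 0.50),
    ("tracker",      8, 32, 0.30),
    ("spectrogram",  5, 48, 0.40),
    ("diagnostics",  2, 16, 0.10),
]

MEM_BUDGET_MB = 128
CPU_BUDGET = 1.0

def admit(tasks, mem_budget, cpu_budget):
    running, mem, cpu = [], 0, 0.0
    for name, prio, need_mem, need_cpu in sorted(tasks, key=lambda t: -t[1]):
        if mem + need_mem <= mem_budget and cpu + need_cpu <= cpu_budget:
            running.append(name)
            mem += need_mem
            cpu += need_cpu
    return running

print(admit(tasks, MEM_BUDGET_MB, CPU_BUDGET))
# e.g. ['beamformer', 'tracker', 'diagnostics']
```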


Engineering of Computer-Based Systems | 2004

WASP: a radio geolocation system on highly resource constrained mobile platforms

Kumar Gaurav Chhokra; Theodore A. Bapty; Jason Scott; S. Winberg; D. van Rheeden; Ben Abbott

In recent years, there has been an increased need for surveillance capabilities in both civilian and military arenas. Mobile unmanned sensor fleets have long been envisioned as a tool for acquiring such intelligence. We describe a geolocation system as a payload for unmanned aerial vehicles. In particular, we address the issues that arise in designing an inexpensive, distributed, mobile, RF sensor processing system: low payload cost; power-limited components; real-time constraints; absence of common clocks; and varying environmental conditions. We also examine, qualitatively, the tradeoff decisions across the dimensions of accuracy (of time of arrival estimates and the final geolocation), computational effort, physical dimensions, and operational logistics. The final system architecture, algorithms, and accuracy figures are also described.
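
The underlying problem, turning differences in arrival time at several sensors into an emitter position, can be sketched as follows. The sensor layout, noise-free timing, and brute-force grid search are illustrative assumptions and are not the algorithm described in the paper.

```python
# Minimal TDOA (time-difference-of-arrival) geolocation sketch: simulate
# arrival-time differences for a known emitter, then recover its position
# by picking the grid point whose predicted TDOAs fit best.
import numpy as np

C = 299_792_458.0                          # speed of light, m/s
sensors = np.array([[0, 0], [1000, 0], [0, 1000], [1000, 1000]], float)
emitter = np.array([620.0, 340.0])         # ground truth, used only to simulate data

toa = np.linalg.norm(sensors - emitter, axis=1) / C
tdoa = toa[1:] - toa[0]                    # differences relative to sensor 0

xs = ys = np.arange(0, 1001, 5.0)          # 5 m search grid (assumed)
best, best_err = None, np.inf
for x in xs:
    for y in ys:
        p = np.array([x, y])
        pred = np.linalg.norm(sensors - p, axis=1) / C
        err = np.sum((pred[1:] - pred[0] - tdoa) ** 2)
        if err < best_err:
            best, best_err = p, err

print("estimated emitter position:", best)  # close to (620, 340)
```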


Microprocessors and Microsystems | 1993

Parallel DSP system integration

Ákos Lédeczi; Ben Abbott; Csaba Biegl; Ted Bapty; Gabor Karsai

Heterogeneous parallel computer architectures with flexible topology, such as transputer or TMS320C40 networks with special purpose coprocessors, are ideal hardware candidates for many signal processing and instrumentation applications. The management of the hardware complexity of large systems, however, presents serious problems. This paper describes a model-based approach to deal with them. The novel features of this method include hierarchical modelling, dual (graphical and declarative) model representation and automatic model analysis. The merits of the approach include the formal representation of the system design, intelligent hardware diagnostic support and fast message routing map and network loader information generation.


9th Computing in Aerospace Conference | 1993

Model-based software synthesis for large systems

Ben Abbott; Ted Bapty; Csaba Biegl; Ákos Lédeczi; Janos Sztipanovits

In this paper, we describe techniques for knowledge representation and compilation of large software systems in a model-based, automatic program synthesis environment. Domain-specific declarative models are used to represent specifications and implementation strategies for reactive systems. Dynamic re-synthesis of an executing system is supported, allowing the system structure to adapt to the external or internal environment. We describe an application of these techniques to a large, high-performance parallel instrumentation system used for analysis of turbine engine strain gauge signals produced during altitude testing. The unique features of this approach include: explicit domain-specific declarative models; graphical representation of models; multiple aspect models; automatic specification of the necessary hardware architecture; and on-line re-synthesis of dynamic systems.
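
The notion of on-line re-synthesis, editing the declarative model of a running system and regenerating its executable structure, can be sketched briefly. The model contents and the use of Python's standard graphlib topological sorter are assumptions for illustration; they are not how the paper's environment performs re-synthesis.

```python
# Hypothetical sketch: the declarative model is edited while the system runs,
# and the executable schedule is regenerated between processing blocks.
import graphlib   # Python 3.9+ standard-library topological sorter

model = {
    "display": {"fft"},
    "fft": {"filter"},
    "filter": {"sensor"},
    "sensor": set(),
}

def resynthesize(model):
    """Rebuild the execution order from the (possibly edited) model."""
    return list(graphlib.TopologicalSorter(model).static_order())

print("initial schedule:", resynthesize(model))

# Environment change: insert a strain-gauge calibration stage before the filter.
model["filter"] = {"calibrate"}
model["calibrate"] = {"sensor"}
print("re-synthesized schedule:", resynthesize(model))
```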


Southeastern Symposium on System Theory | 1990

Graphical programming for the transputer

Ben Abbott; Csaba Biegl; Richard Souder; Ted Bapty; Janos Sztipanovits

The Multigraph programming environment provides a very-high-level programmer interface for the development of parallel and real-time processing systems. It is specifically targeted at large systems that integrate a knowledge-based synthesis technique with standard numerical techniques. The result is a graphical editing environment where the user models the structure of the desired computation. Subsequently, symbolic techniques are used to translate this model into a large-grain data-flow graph. A description is given of the concepts and use of the Multigraph programming environment on a tightly coupled parallel processing platform, the INMOS transputer.

Collaboration


Dive into Ben Abbott's collaborations.

Top Co-Authors

Ted Bapty

Vanderbilt University

Janos Sztipanovits

University of Alabama at Birmingham

Gregory C. Willden

Southwest Research Institute

Jeremy C. Price

Southwest Research Institute
