Publication


Featured research published by Robert Bond.


International Conference on Acoustics, Speech, and Signal Processing | 2012

Dynamic distributed dimensional data model (D4M) database and computation system

Jeremy Kepner; William Bergeron; Nadya T. Bliss; Robert Bond; Chansup Byun; Gary R. Condon; Kenneth L. Gregson; Matthew Hubbell; Jonathan Kurz; Andrew McCabe; Peter Michaleas; Andrew Prout; Albert Reuther; Antonio Rosa; Charles Yee

A crucial element of large web companies is their ability to collect and analyze massive amounts of data. Tuple store databases are a key enabling technology employed by many of these companies (e.g., Google Big Table and Amazon Dynamo). Tuple stores are highly scalable and run on commodity clusters, but lack interfaces to support efficient development of mathematically based analytics. D4M (Dynamic Distributed Dimensional Data Model) has been developed to provide a mathematically rich interface to tuple stores (and structured query language “SQL” databases). D4M allows linear algebra to be readily applied to databases. Using D4M, it is possible to create composable analytics with significantly less effort than using traditional approaches. This work describes the D4M technology and its application and performance.
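The core idea described above, representing database tuples as sparse associative arrays so that a "join" or correlation becomes a matrix multiply, can be sketched in a few lines. This is an illustrative toy model of the concept, not the actual D4M API; the function names and data layout here are assumptions for demonstration.

```python
# Toy model of a D4M-style associative array: a sparse matrix indexed by
# string row/column keys, stored as a dict of dicts. A database correlation
# ("which users share documents?") becomes a sparse matrix multiply A * A'.
from collections import defaultdict

def assoc(triples):
    """Build a row -> col -> value associative array from (row, col, val) triples."""
    a = defaultdict(dict)
    for r, c, v in triples:
        a[r][c] = v
    return a

def matmul(a, b):
    """Associative-array multiply: out[r][c] = sum over k of a[r][k] * b[k][c]."""
    out = defaultdict(dict)
    for r, row in a.items():
        for k, v in row.items():
            for c, w in b.get(k, {}).items():
                out[r][c] = out[r].get(c, 0) + v * w
    return out

# A small user-to-document incidence array and its transpose.
edges = assoc([("alice", "doc1", 1), ("bob", "doc1", 1), ("bob", "doc2", 1)])
edgesT = assoc([(c, r, v) for r, row in edges.items() for c, v in row.items()])

# edges * edges' counts documents shared by each pair of users.
shared = matmul(edges, edgesT)
print(shared["alice"]["bob"])  # 1 (alice and bob share doc1)
```

The same multiply applied to much larger sparse arrays is the kind of composable analytic the paper argues tuple stores lack native support for.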


IEEE Signal Processing Magazine | 2009

Multicore software technologies

Hahn Kim; Robert Bond

Multicore architectures require parallel computation and explicit management of the memory hierarchy, both of which add programming complexity and are unfamiliar to most programmers. While MPI and OpenMP still have a place in the multicore world, the learning curves are simply too steep for most programmers. New technologies are needed to make multicore processors accessible to a larger community. The signal and image processing community stands to benefit immensely from such technologies. This article provides a survey of new software technologies that hide the complexity of multicore architectures, allowing programmers to focus on algorithms instead of architectures.


HPCMP Users Group Conference | 2005

pMapper: Automatic Mapping of Parallel Matlab Programs

Nadya Travinin; Henry Hoffmann; Robert Bond; Hector Chan; Jeremy Kepner; Edmund Wong

Algorithm implementation efficiency is key to delivering high-performance computing capabilities to demanding, high throughput DoD signal and image processing applications and simulations. Significant progress has been made in compiler optimization of serial programs, but many applications require parallel processing, which brings with it the difficult task of determining efficient mappings of algorithms to multiprocessor computers. The pMapper infrastructure addresses the problem of performance optimization of multistage MATLAB applications on parallel architectures. pMapper is an automatic performance tuning library written as a layer on top of pMatlab. pMatlab is a parallel Matlab toolbox that provides MATLAB users with global array semantics. While pMatlab abstracts the message-passing interface, the responsibility of generating maps for numerical arrays still falls on the user. A processor map for a numerical array is defined as an assignment of blocks of data to processing elements. Choosing the best mapping for a set of numerical arrays in a program is a nontrivial task that requires significant knowledge of programming languages, parallel computing, and processor architecture. pMapper automates the task of map generation, increasing the ease of programming and productivity. In addition to automating the mapping of parallel Matlab programs, pMapper could be used as a mapping tool for embedded systems. This paper addresses the design details of the pMapper infrastructure and presents preliminary results.
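The abstract defines a processor map as an assignment of blocks of data to processing elements. A minimal sketch of that idea, assuming a simple contiguous row-block distribution (a toy model of the concept, not pMapper's actual data structures):

```python
# A "processor map" in the sense above: assign the rows of an array to
# processing elements in near-equal contiguous blocks. An automatic mapper
# would search over candidate maps like these and pick the fastest.

def block_map(n_rows, n_procs):
    """Return one (start, stop) half-open row range per processor,
    splitting n_rows rows into n_procs near-equal contiguous blocks."""
    base, extra = divmod(n_rows, n_procs)
    ranges, start = [], 0
    for p in range(n_procs):
        stop = start + base + (1 if p < extra else 0)  # early procs get the remainder
        ranges.append((start, stop))
        start = stop
    return ranges

# A 10-row array distributed over 4 processors:
print(block_map(10, 4))  # [(0, 3), (3, 6), (6, 8), (8, 10)]
```

Choosing among such maps automatically, rather than asking the user to supply them, is the productivity gain pMapper targets.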


Conference on Scientific Computing | 1997

A Portable, Object-Based Parallel Library and Layered Framework for Real-Time Radar Signal Processing

Cecelia DeLuca; Curtis W. Heisey; Robert Bond; Jim Daly

We have developed an object-based, layered framework and associated library in C for real-time radar applications. Object classes allow us to reuse code modules, and a layered framework enhances the portability of applications. The framework is divided into a machine-dependent kernel layer, a mathematical library layer, and an application layer. We meet performance requirements by highly optimizing the kernel layer, and by performing allocations and preparations for data transfers during a set-up time. Our initial application employs a space-time adaptive processing (STAP) algorithm and requires throughput on the order of 20 Gflop/s (sustained), with 1 s latency. We present performance results for a key portion of the STAP algorithm and discuss future work.


Computing in Science and Engineering | 2009

High-Productivity Software Development with pMatlab

Julie Mullen; Nadya T. Bliss; Robert Bond; Jeremy Kepner; Hahn Kim; Albert Reuther

In this paper, we explore the ease of tackling a communication-intensive parallel computing task, namely the 2D fast Fourier transform (FFT). We start with a simple serial Matlab code, explore in detail a 1D parallel FFT, and illustrate how it can be extended to multidimensional FFTs.
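The extension from 1D to 2D FFTs rests on the standard row-column decomposition: 1D FFTs along every row, a transpose (the "corner turn" that dominates communication in a parallel setting), then 1D FFTs along the former columns. A serial NumPy sketch of the decomposition:

```python
# Row-column decomposition of the 2D FFT: two stages of 1D FFTs separated
# by a transpose. In a distributed-memory implementation, each stage is
# embarrassingly parallel across rows, and the transpose between stages is
# the communication-intensive step.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64))

stage1 = np.fft.fft(x, axis=1)            # 1D FFT of every row
y = np.fft.fft(stage1.T, axis=1).T        # corner turn, row FFTs, turn back

assert np.allclose(y, np.fft.fft2(x))     # matches the direct 2D FFT
```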


Optical Fiber Communication Conference | 2011

Photonics for HPEC: A low-powered solution for high bandwidth applications

Eric Robinson; Gilbert Hendry; Vitaliy Gleyzer; Johnnie Chan; Luca P. Carloni; Nadya T. Bliss; Robert Bond; Keren Bergman

Photonics offer high bandwidth for minimal power. While critical for the future of HPC, this has an immediate impact in HPEC, where power is critical. Here, a 4–10x improvement in performance/watt can be demonstrated.


IEEE Radar Conference | 2008

Embedded Digital Signal Processing for Radar Applications

David R. Martinez; Robert Bond; Michael Vai

In the last ten years, there has been significant emphasis on advancing sensor systems with active electronically scanned antennas (AESAs). The recent advances in computing technologies make it affordable to exploit the flexibility of AESAs using very high performance embedded computers for signal and image processing. This tutorial presents an overview of applications demanding real-time embedded computing, an introduction to hardware and software implementation techniques, recent advances in hardware and software standards to achieve rapid technology insertion, and a look into observed embedded computing trends.


Archive | 2008

Radar Signal Processing: An Example of High Performance Embedded Computing

Albert Reuther; Robert Bond

This article recounts the development of radar signal processing at Lincoln Laboratory. The Laboratory’s significant efforts in this field were initially driven by the need to provide detected and processed signals for air and ballistic missile defense systems. The first processing work was on the Semi-Automatic Ground Environment (SAGE) air-defense system, which led to algorithms and techniques for detection of aircraft in the presence of clutter. This work was quickly followed by processing efforts in ballistic missile defense, first in surface-acoustic-wave technology, in concurrence with the initiation of radar measurements at the Kwajalein Missile Range, and then by exploitation of the newly evolving technology of digital signal processing, which led to important contributions for ballistic missile defense and Federal Aviation Administration applications. More recently, the Laboratory has pursued the computationally challenging application of adaptive processing for the suppression of jamming and clutter signals. This article discusses several important programs in these areas.


International Conference on Cluster Computing | 2005

Automatic Parallelization with pMapper

Nadya Travinin; Henry Hoffmann; Robert Bond; Hector Chan; Jeremy Kepner; Edmund Wong

Algorithm implementation efficiency is key to delivering high-performance computing capabilities to demanding, high throughput signal and image processing applications and simulations. Significant progress has been made in optimization of serial programs, but many applications require parallel processing, which brings with it the difficult task of determining efficient mappings of algorithms. The pMapper infrastructure addresses the problem of performance optimization of multistage MATLAB® applications on parallel architectures. pMapper is an automatic performance tuning library written as a layer on top of pMatlab, the Parallel Matlab Toolbox. While pMatlab abstracts the message-passing interface, the responsibility of mapping numerical arrays falls on the user. Choosing the best mapping for a set of numerical arrays is a nontrivial task that requires significant knowledge of programming languages, parallel computing, and processor architecture. pMapper automates the task of map generation. This paper addresses the design details of pMapper and presents preliminary results.


Asilomar Conference on Signals, Systems and Computers | 2002

Discrete optimization using decision-directed learning for distributed networked computing

Joel Goodman; Albert Reuther; Robert Bond; Hector Chan; Harold M. Heggestad; Mike Seibert

Decision-directed learning (DDL) is an iterative discrete approach to finding a feasible solution for large-scale combinatorial optimization problems. DDL is capable of efficiently formulating a solution to network scheduling problems that involve load limiting device utilization, selecting parallel configurations for software applications and host hardware using a minimum set of resources, and meeting time-to-result performance requirements in a dynamic network environment. The paper quantifies the algorithms that constitute DDL and compares its performance to other popular combinatorial optimization techniques. This is done within the context of self-directed, real-time networked resource configuration for dynamically building a mission-specific signal processor for real-time distributed and parallel applications.

Collaboration


Dive into Robert Bond's collaborations.

Top Co-Authors

Jeremy Kepner, Massachusetts Institute of Technology
Albert Reuther, Massachusetts Institute of Technology
Nadya T. Bliss, Massachusetts Institute of Technology
David R. Martinez, Massachusetts Institute of Technology
Hahn Kim, Massachusetts Institute of Technology
Nadya Travinin, Massachusetts Institute of Technology
Hector Chan, Massachusetts Institute of Technology
Edmund Wong, Massachusetts Institute of Technology
Eric Robinson, University of California