Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where William Plishker is active.

Publication


Featured research published by William Plishker.


rapid system prototyping | 2008

Functional DIF for Rapid Prototyping

William Plishker; Nimish Sane; Mary Kiemb; Kapil Anand; Shuvra S. Bhattacharyya

Dataflow formalisms have provided designers of digital signal processing systems with optimizations and guarantees to arrive at quality prototypes quickly. As system complexity increases, designers are expressing more types of behavior in dataflow languages to retain these implementation benefits. While the semantic range of DSP-oriented dataflow models has expanded to cover quasi-static and dynamic applications, efficient functional simulation of such applications has not. Complexity in scheduling and modeling has impeded efforts towards functional simulation that matches the final implementation. We provide this functionality by introducing a new dataflow model of computation, called enable-invoke dataflow (EIDF), that supports flexible and efficient prototyping of dataflow-based application representations. EIDF permits the natural description of actors for dynamic and static dataflow models. We integrate EIDF into the dataflow interchange format (DIF) package and demonstrate the approach on the design of a polynomial evaluation accelerator targeting an FPGA implementation. Our experiments show that a design environment based on EIDF can achieve simulation results that are functionally correct with respect to a Verilog implementation, allowing the application designer to arrive at a verified functional simulation faster, and therefore at a functional prototype much more quickly than with traditional design practices.
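
The enable-invoke separation can be illustrated with a minimal, hypothetical Python sketch; the class, method, and queue names below are illustrative assumptions, not the actual DIF package API. Each actor exposes an enable check that tests only the token availability of its current mode, and an invoke step that fires that mode and selects the next one.

```python
# Minimal sketch of an enable-invoke dataflow (EIDF) style actor, assuming a
# hypothetical token-queue interface; not the actual DIF package API.
from collections import deque

class SwitchActor:
    """Dynamic 'switch' actor: mode 'control' reads a control token, then
    mode 'route_0' or 'route_1' forwards one data token accordingly."""
    def __init__(self):
        self.mode = "control"
        self.control = deque()          # input edge: control tokens (0 or 1)
        self.data = deque()             # input edge: data tokens
        self.out = [deque(), deque()]   # two output edges
        self._sel = 0

    def enable(self):
        # Enable: check token availability for the current mode only.
        if self.mode == "control":
            return len(self.control) >= 1
        return len(self.data) >= 1

    def invoke(self):
        # Invoke: consume/produce tokens for the current mode and
        # choose the next mode (the dynamic part of the behavior).
        if self.mode == "control":
            self._sel = self.control.popleft()
            self.mode = f"route_{self._sel}"
        else:
            self.out[self._sel].append(self.data.popleft())
            self.mode = "control"

# A trivial simulation loop: fire whenever the actor is enabled.
a = SwitchActor()
a.control.extend([0, 1]); a.data.extend(["x", "y"])
while a.enable():
    a.invoke()
print(list(a.out[0]), list(a.out[1]))   # ['x'] ['y']
```

Because the enable check never consumes tokens, a simulator or scheduler can probe actors freely, which is what makes both static and dynamic actors fit the same firing interface.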


Medical Physics | 2007

Automatic segmentation of phase-correlated CT scans through nonrigid image registration using geometrically regularized free-form deformation.

Raj Shekhar; Peng Lei; Carlos R. Castro-Pareja; William Plishker; W D'Souza

Conventional radiotherapy is planned using free-breathing computed tomography (CT), ignoring the motion and deformation of the anatomy from respiration. New breath-hold-synchronized, gated, and four-dimensional (4D) CT acquisition strategies are enabling radiotherapy planning utilizing a set of CT scans belonging to different phases of the breathing cycle. Such 4D treatment planning relies on the availability of tumor and organ contours in all phases. The current practice of manual segmentation is impractical for 4D CT, because it is time consuming and tedious. A viable solution is registration-based segmentation, through which contours provided by an expert for a particular phase are propagated to all other phases while accounting for phase-to-phase motion and anatomical deformation. Deformable image registration is central to this task, and a free-form deformation-based nonrigid image registration algorithm is presented. Compared with the original algorithm, this version uses novel, computationally simpler geometric constraints to preserve the topology of the dense control-point grid used to represent free-form deformation and prevent tissue fold-over. Using mean squared difference as an image similarity criterion, the inhale phase is registered to the exhale phase of lung CT scans of five patients and of characteristically low-contrast abdominal CT scans of four patients. In addition, using expert contours for the inhale phase, the corresponding contours were automatically generated for the exhale phase. The accuracy of the segmentation (and hence deformable image registration) was judged by comparing automatically segmented contours with expert contours traced directly in the exhale phase scan using three metrics: volume overlap index, root mean square distance, and Hausdorff distance. The accuracy of the segmentation (in terms of radial distance mismatch) was approximately 2 mm in the thorax and 3 mm in the abdomen, which compares favorably to the accuracies reported elsewhere. Unlike most prior work, segmentation of the tumor is also presented. The clinical implementation of 4D treatment planning is critically dependent on automatic segmentation, for which this work offers one of the most accurate algorithms presented to date.
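
The three reported accuracy metrics can be sketched as follows, assuming contours are sampled as point arrays in millimeter coordinates and segmented volumes as boolean voxel masks. The exact definitions used in the paper (e.g., whether the volume overlap index is a Dice-style ratio) are assumptions here, not taken from the source.

```python
# Sketch of the three contour-comparison metrics; illustrative only.
import numpy as np
from scipy.spatial.distance import cdist

def volume_overlap_index(mask_auto, mask_expert):
    # Overlap of the two segmented volumes (Dice-style ratio assumed).
    inter = np.logical_and(mask_auto, mask_expert).sum()
    return 2.0 * inter / (mask_auto.sum() + mask_expert.sum())

def rms_distance(pts_auto, pts_expert):
    # RMS of each automatic contour point's distance to the expert contour.
    d = cdist(pts_auto, pts_expert).min(axis=1)
    return np.sqrt(np.mean(d ** 2))

def hausdorff_distance(pts_auto, pts_expert):
    # Symmetric Hausdorff distance: worst-case mismatch between contours.
    d = cdist(pts_auto, pts_expert)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```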


design, automation, and test in europe | 2009

A generalized scheduling approach for dynamic dataflow applications

William Plishker; Nimish Sane; Shuvra S. Bhattacharyya

For a number of years, dataflow concepts have provided designers of digital signal processing systems with environments capable of expressing high-level software architectures as well as low-level, performance-oriented kernels. But analysis of system-level trade-offs has been inhibited by the diversity of models and the dynamic nature of modern dataflow applications. To facilitate design space exploration for software implementations of heterogeneous dataflow applications, developers need tools capable of deeply analyzing and optimizing the application. To this end, we present a new scheduling approach that leverages a recently proposed general model of dynamic dataflow called core functional dataflow (CFDF). CFDF supports high-level application descriptions with multiple models of dataflow by structuring actors with sets of modes that represent fixed behaviors. In this work we show that by decomposing a dynamic dataflow graph as directed by its modes, we can derive a set of static dataflow graphs that interact dynamically. This enables designers to readily apply existing model-specific scheduling techniques to all or some parts of the application while using custom schedulers for others. We demonstrate this generalized dataflow scheduling method on dynamic mixed-model applications and show that run time and buffer sizes improve significantly compared to a baseline dynamic dataflow scheduler and simulator.
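
The core idea of "static graphs that interact dynamically" can be sketched in a few lines of Python. Everything here is illustrative: the schedule keys, actor names, and callback signatures are assumptions, not the DIF package API or the paper's algorithm. Per-mode behavior is static, so firing sequences can be precomputed per mode configuration, and only the choice among them is made at run time.

```python
# Toy sketch of mode-directed decomposition: precomputed static firing
# sequences, one per mode configuration of the dynamic actors (here a single
# switch actor with two routing modes). Illustrative names only.
static_schedules = {
    "route_0": ["src", "switch", "sink_0"],
    "route_1": ["src", "switch", "sink_1"],
}

def run_static(schedule_key, fire, repetitions=1):
    """Replay a precomputed static schedule; the order of firings inside it
    was fixed ahead of time by a static (e.g., SDF-style) scheduler."""
    for _ in range(repetitions):
        for actor in static_schedules[schedule_key]:
            fire(actor)

def dispatcher(read_control_token, fire, iterations):
    # Dynamic part: inspect the control stream, then hand control to the
    # appropriate static schedule for one graph iteration.
    for _ in range(iterations):
        sel = read_control_token()          # 0 or 1
        run_static(f"route_{sel}", fire)
```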


IEEE Transactions on Circuits and Systems for Video Technology | 2009

Exploring the Concurrency of an MPEG RVC Decoder Based on Dataflow Program Analysis

Ruirui Gu; Jorn W. Janneck; Shuvra S. Bhattacharyya; Mickaël Raulet; Matthieu Wipliez; William Plishker

This paper presents an in-depth case study on dataflow-based analysis and exploitation of parallelism in the design and implementation of an MPEG reconfigurable video coding (RVC) decoder. Dataflow descriptions have been used in a wide range of digital signal processing (DSP) applications, such as applications for multimedia processing and wireless communications. Because dataflow models are effective in exposing concurrency and other important forms of high level application structure, dataflow techniques are promising for implementing complex DSP applications on multicore systems, and other kinds of parallel processing platforms. In this paper, we use the CAL actor language as a concrete framework for representing and demonstrating dataflow design techniques. Furthermore, we also describe our application of the dataflow interchange format (DIF) package (TDP), a software tool for analyzing dataflow networks, to the systematic exploitation of concurrency in CAL networks that are targeted to multicore platforms. Using TDP, one can automatically process regions that are extracted from the original network and exhibit properties similar to synchronous dataflow (SDF) models. This is important in our context because powerful techniques, based on static scheduling, are available for exploiting concurrency in SDF descriptions. Detection of SDF-like regions is an important step for applying static scheduling techniques within a dynamic dataflow framework. Furthermore, segmenting a system into SDF-like regions also allows us to explore cross-actor concurrency that results from dynamic dependences among different regions. Using SDF-like region detection as a preprocessing step to software synthesis generally provides an efficient way for mapping tasks to multicore systems, and improves the system performance of video processing applications on multicore platforms.
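
A rough sketch of SDF-like region detection is shown below, assuming each actor has been annotated (by some prior rate analysis) as static or dynamic; the annotation scheme, function name, and example actors are illustrative assumptions, and the TDP tool's actual analysis is more sophisticated.

```python
# Sketch: group statically-rated actors into connected SDF-like regions.
from collections import defaultdict

def sdf_like_regions(edges, is_static):
    """edges: list of (src_actor, dst_actor).
    is_static: actor -> True if all its port rates are fixed integers.
    Returns connected groups of static actors, joined only by edges whose
    endpoints are both static."""
    adj = defaultdict(set)
    for u, v in edges:
        if is_static.get(u) and is_static.get(v):
            adj[u].add(v); adj[v].add(u)
    seen, regions = set(), []
    for a in (x for x in is_static if is_static[x]):
        if a in seen:
            continue
        stack, region = [a], set()
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n); region.add(n)
            stack.extend(adj[n] - seen)
        regions.append(region)
    return regions

# Example: the bitstream parser is dynamic; the IQ/IDCT/MC chain is static.
edges = [("parser", "iq"), ("iq", "idct"), ("idct", "mc"), ("parser", "mc")]
static = {"parser": False, "iq": True, "idct": True, "mc": True}
print(sdf_like_regions(edges, static))   # [{'iq', 'idct', 'mc'}] (set order may vary)
```

Each detected region can then be handed to a static scheduler, while the remaining dynamic actors are scheduled at run time.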


high performance embedded architectures and compilers | 2008

Heterogeneous design in functional DIF

William Plishker; Nimish Sane; Mary Kiemb; Shuvra S. Bhattacharyya

Dataflow formalisms have provided designers of digital signal processing systems with analysis and optimizations for many years. As system complexity increases, designers are relying on more types of dataflow models to describe applications while retaining these implementation benefits. The semantic range of DSP-oriented dataflow models has expanded to cover heterogeneous models and dynamic applications, but efficient design, simulation, and scheduling of such applications have not kept pace. To facilitate implementing heterogeneous applications, we utilize a new dataflow model of computation and show how actors designed in other dataflow models are directly supported by this framework, allowing system designers to immediately compose and simulate actors from different models. Using an example, we show how this approach can be applied to quickly describe and functionally simulate a heterogeneous dataflow-based application such that a designer may analyze and tune trade-offs among different models and schedules for simulation time, memory consumption, and schedule size.
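
To illustrate why actors from static models compose directly with dynamic ones under a common firing interface, here is a small sketch of a synchronous dataflow (SDF) actor written against the same hypothetical enable/invoke interface used in the earlier EIDF example; names and structure are assumptions, not the functional DIF API.

```python
# Sketch: a static (SDF) actor fits the enable/invoke interface trivially,
# since it has a single mode with fixed token rates. Illustrative only.
from collections import deque

class FIRActor:
    """SDF actor: consumes 1 sample and produces 1 sample per firing."""
    def __init__(self, taps):
        self.taps = list(taps)
        self.hist = deque([0.0] * len(taps), maxlen=len(taps))
        self.inp, self.out = deque(), deque()

    def enable(self):
        return len(self.inp) >= 1            # fixed consumption rate of 1

    def invoke(self):
        self.hist.appendleft(self.inp.popleft())
        self.out.append(sum(t * x for t, x in zip(self.taps, self.hist)))
        # Single mode: the next mode is always this same one.
```

Because the interface is identical, a simulator can mix such static actors with dynamic ones in a single graph and schedule each with whatever technique suits its model.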


rapid system prototyping | 2011

Applying graphics processor acceleration in a software defined radio prototyping environment

William Plishker; George F. Zaki; Shuvra S. Bhattacharyya; Charles Clancy; John Kuykendall

With higher bandwidth requirements and more complex protocols, software defined radio (SDR) has ever growing computational demands. SDR applications have different levels of parallelism that can be exploited on multicore platforms, but design and programming difficulties have inhibited the adoption of specialized multicore platforms like graphics processors (GPUs). In this work we propose a new design flow that augments a popular existing SDR development environment (GNU Radio) with a dataflow foundation and a stand-alone GPU accelerated library. The approach gives an SDR developer the ability to prototype a GPU accelerated application and explore its design space quickly and effectively. We demonstrate this design flow on a standard SDR benchmark and show that deciding how to utilize a GPU can be non-trivial for even relatively simple applications.
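
Why the offload decision is non-trivial can be seen with a back-of-the-envelope model; all numbers and parameter names below are illustrative assumptions, not the paper's measurements. A GPU kernel only pays off when its per-sample speedup outweighs the fixed launch/synchronization overhead and the host-device transfer cost.

```python
# Rough model of the CPU-vs-GPU decision for a single SDR block.
# All parameters are illustrative assumptions, not measured values.
def gpu_pays_off(n_samples, cpu_ns_per_sample, gpu_ns_per_sample,
                 transfer_ns_per_sample, fixed_overhead_ns=1_000_000):
    cpu_time = n_samples * cpu_ns_per_sample
    gpu_time = (fixed_overhead_ns                      # launch + sync (assumed)
                + n_samples * (gpu_ns_per_sample
                               + 2 * transfer_ns_per_sample))  # to and from device
    return gpu_time < cpu_time

# A small per-buffer workload can lose to overhead even when the GPU kernel
# itself is 10x faster per sample; a large one amortizes the overhead.
print(gpu_pays_off(4_096, cpu_ns_per_sample=50,
                   gpu_ns_per_sample=5, transfer_ns_per_sample=10))   # False
print(gpu_pays_off(1_000_000, 50, 5, 10))                             # True
```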


real-time systems symposium | 2007

An Energy-Driven Design Methodology for Distributing DSP Applications across Wireless Sensor Networks

Chung-Ching Shen; William Plishker; Shuvra S. Bhattacharyya; Neil Goldsman

Wireless sensor network (WSN) applications have been studied extensively in recent years. Such applications involve resource-limited embedded sensor nodes that have small size and low power requirements. Based on the need for extended network lifetimes in WSNs in terms of energy use, the energy efficiency of computation and communication operations in the embedded sensor nodes becomes critical. Digital signal processing (DSP) applications typically require intensive data processing operations. They are difficult to apply directly in resource-limited WSNs because their operational complexity can strongly influence the network lifetime. In this paper, we present a design methodology for modeling and implementing DSP applications applied to wireless sensor networks. This methodology explores efficient modeling techniques for DSP applications, including acoustic sensing and data processing; derives formulations of energy-driven partitioning for distributing such applications across wireless sensor networks; and develops efficient heuristic algorithms for finding partitioning results that maximize the network lifetime. A case study involving a speech recognition system demonstrates the capabilities of our proposed methodology.


international conference on multimedia and expo | 2011

A design tool for efficient mapping of multimedia applications onto heterogeneous platforms

Chung-Ching Shen; Hsiang-Huang Wu; Nimish Sane; William Plishker; Shuvra S. Bhattacharyya

Development of multimedia systems on heterogeneous platforms is a challenging task with existing design tools due to a lack of rigorous integration between high-level abstract modeling and low-level synthesis and analysis. In this paper, we present a new dataflow-based design tool, called the targeted dataflow interchange format (TDIF), for design, analysis, and implementation of embedded software for multimedia systems. Our approach provides novel capabilities, based on the principles of task-level dataflow analysis, for exploring and optimizing interactions across application behavior; operational context; heterogeneous platforms, including high performance embedded processing architectures; and implementation constraints.


ACM Transactions on Sensor Networks | 2010

Energy-driven distribution of signal processing applications across wireless sensor networks

Chung-Ching Shen; William Plishker; Dong-Ik Ko; Shuvra S. Bhattacharyya; Neil Goldsman

Wireless sensor network (WSN) applications have been studied extensively in recent years. Such applications involve resource-limited embedded sensor nodes that have small size and low power requirements. Based on the need for extended network lifetimes in WSNs in terms of energy use, the energy efficiency of computation and communication operations in the sensor nodes becomes critical. Digital Signal Processing (DSP) applications typically require intensive data processing operations and as a result are difficult to implement directly in resource-limited WSNs. In this article, we present a novel design methodology for modeling and implementing computationally intensive DSP applications applied to wireless sensor networks. This methodology explores efficient modeling techniques for DSP applications, including data sensing and processing; derives formulations of Energy-Driven Partitioning (EDP) for distributing such applications across wireless sensor networks; and develops efficient heuristic algorithms for finding partitioning results that maximize the network lifetime. To address such an energy-driven partitioning problem, this article provides a new way of aggregating data and reducing communication traffic among nodes based on application analysis. By considering low data token delivery points and the distribution of computation in the application, our approach finds energy-efficient trade-offs between data communication and computation.
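
A toy version of the energy-driven partitioning idea is sketched below as a greedy heuristic over illustrative cost tables; this is an assumption-laden simplification, not the paper's EDP formulation or its heuristic algorithms. Each task is placed on the candidate node that minimizes the worst fraction of any node's battery consumed, as a proxy for maximizing network lifetime.

```python
# Toy greedy sketch of energy-driven task placement across sensor nodes.
# Illustrative only; cost models and parameter names are assumptions.
def partition(tasks, nodes, comp_energy, comm_energy, battery):
    """tasks: ordered list of task names (data flows task[i] -> task[i+1]).
    comp_energy[(task, node)]: energy to run a task on a node.
    comm_energy[(node_a, node_b)]: energy to ship one result between nodes.
    battery[node]: initial energy budget of each node."""
    used = {n: 0.0 for n in nodes}
    placement, prev = {}, None
    for t in tasks:
        def worst_drain(n):
            extra = comp_energy[(t, n)]
            if prev is not None and prev != n:
                extra += comm_energy[(prev, n)]   # receiving node pays transfer
            # Lifetime proxy: the largest fraction of battery used on any node.
            return max((used[m] + (extra if m == n else 0.0)) / battery[m]
                       for m in nodes)
        best = min(nodes, key=worst_drain)
        used[best] += comp_energy[(t, best)]
        if prev is not None and prev != best:
            used[best] += comm_energy[(prev, best)]
        placement[t] = best
        prev = best
    return placement
```

The trade-off the abstract describes appears here directly: colocating consecutive tasks avoids communication energy but concentrates computation drain on one node, while spreading tasks balances drain at the cost of extra data transfers.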


ieee international symposium on parallel & distributed processing, workshops and phd forum | 2011

A Model-Based Schedule Representation for Heterogeneous Mapping of Dataflow Graphs

Hsiang-Huang Wu; Chung-Ching Shen; Nimish Sane; William Plishker; Shuvra S. Bhattacharyya

Dataflow-based application specifications are widely used in model-based design methodologies for signal processing systems. In this paper, we develop a new model called the dataflow schedule graph (DSG) for representing a broad class of dataflow graph schedules. The DSG provides a graphical representation of schedules based on dataflow semantics. In conventional approaches, applications are represented using dataflow graphs, whereas schedules for the graphs are represented using specialized notations, such as various kinds of sequences or looping constructs. In contrast, the DSG approach employs dataflow graphs for representing both application models and schedules that are derived from them. Our DSG approach provides a precise, formal framework for unambiguously representing, analyzing, manipulating, and interchanging schedules. We develop detailed formulations of the DSG representation, and present examples and experimental results that demonstrate the utility of DSGs in the context of heterogeneous signal processing system design.
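
The flavor of representing a schedule as a dataflow graph can be sketched as follows; the node class, token-passing convention, and example schedule are illustrative assumptions, not the paper's formal DSG semantics.

```python
# Toy sketch of a dataflow schedule graph (DSG): the schedule is itself a
# graph whose "reference" nodes fire actors of the application graph, and a
# single control token sequences them. Illustrative only.
class Ref:
    """Schedule-graph node that fires one application actor a fixed number of
    times when it receives the control token, then forwards the token."""
    def __init__(self, actor_name, repeat=1):
        self.actor_name, self.repeat, self.next = actor_name, repeat, None

def run_dsg(start, fire, iterations=1):
    for _ in range(iterations):          # the control token traverses the chain
        node = start
        while node is not None:
            for _ in range(node.repeat):
                fire(node.actor_name)
            node = node.next

# Encode the looped schedule (3 src) (3 filt) (1 snk) as a chain of Ref nodes.
src, filt, snk = Ref("src", 3), Ref("filt", 3), Ref("snk")
src.next, filt.next = filt, snk
run_dsg(src, fire=print)   # prints src x3, filt x3, snk
```

Because the schedule is expressed in dataflow terms rather than as an ad hoc looping notation, it can be analyzed and manipulated with the same graph machinery used for the application itself.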

Collaboration


Dive into William Plishker's collaborations.

Top Co-Authors

Raj Shekhar, Children's National Medical Center

John Wong, Johns Hopkins University

Harry Quon, Johns Hopkins University

Junghoon Lee, Johns Hopkins University

Seyoun Park, Johns Hopkins University

T.R. McNutt, Johns Hopkins University

Xinyang Liu, Children's National Medical Center