Publication


Featured research published by Ed Komp.


IEEE Micro | 2004

Programming models for hybrid FPGA-CPU computational components: a missing link

David L. Andrews; Douglas Niehaus; Razali Jidin; Michael Finley; Wesley Peck; Michael Frisbie; Jorge L. Ortiz; Ed Komp; Peter J. Ashenden

Emerging hybrid chips containing CPU and FPGA components are an exciting new development, promising commercial off-the-shelf economies of scale while also supporting hardware customization.


IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 2008

Achieving Programming Model Abstractions for Reconfigurable Computing

David L. Andrews; Ron Sass; Erik K. Anderson; Jason Agron; Wesley Peck; Jim Stevens; Fabrice Baijot; Ed Komp

This paper introduces hthreads, a unifying programming model for specifying application threads running within a hybrid central processing unit (CPU)/field-programmable gate array (FPGA) system. Presently accepted hybrid CPU/FPGA computational models, and access to these computational models via high-level languages, focus on programming language extensions to increase accessibility and portability. However, this paper argues that new high-level programming models built on common software abstractions better address these goals. The hthreads system is unique within the reconfigurable computing community in that it includes operating system and middleware layer abstractions that extend across the CPU/FPGA boundary. This enables all platform components to be abstracted into a unified multiprocessor architecture platform. Application programmers can then express their computations using threads specified from a single POSIX threads (pthreads) multithreaded application program and can compile the threads to run on the CPU or synthesize them to run within an FPGA. To enable this seamless framework, we have created the hardware thread interface (HWTI) component to provide an abstract, platform-independent compilation target for hardware-resident computations. The HWTI enables the use of standard thread communication and synchronization operations across the software/hardware boundary. Key operating system primitives have been mapped into hardware to provide threads running in both hardware and software uniform access to a set of sub-microsecond, minimal-jitter services. Migrating the operating system into hardware removes the potential bottleneck of routing all system service requests through a central CPU.


Field-Programmable Custom Computing Machines | 2006

Enabling a Uniform Programming Model Across the Software/Hardware Boundary

Erik K. Anderson; Jason Agron; Wesley Peck; Jim Stevens; Fabrice Baijot; Ed Komp; Ron Sass; David L. Andrews

In this paper, we present hthreads, a unifying programming model for specifying application threads running within a hybrid CPU/FPGA system. Threads are specified from a single pthreads multithreaded application program and compiled to run on the CPU or synthesized to run on the FPGA. The hthreads system is unique within the reconfigurable computing community in that it abstracts the CPU/FPGA components into a unified custom threaded multiprocessor architecture platform. To support the abstraction of the CPU/FPGA component boundary, we have created the hardware thread interface (HWTI) component, which frees the designer from having to specify and embed platform-specific instructions to form customized hardware/software interactions. Instead, the hardware thread interface supports the generalized pthreads API semantics and allows passing of abstract data types between hardware and software threads. Thus the hardware thread interface provides an abstract, platform-independent compilation target that enables thread- and instruction-level parallelism across the software/hardware boundary.


Implementation and Application of Functional Languages | 2009

Introducing Kansas Lava

Andy Gill; Tristan Bull; Garrin Kimmell; Erik Perrins; Ed Komp; Brett Werling

Kansas Lava is a domain specific language for hardware description. Though there have been a number of previous implementations of Lava, we have found the design space rich, with unexplored choices. We use a direct (Chalmers style) specification of circuits, and make significant use of Haskell overloading of standard classes, leading to concise circuit descriptions. Kansas Lava supports both simulation (inside GHCi), and execution via VHDL, by having a dual shallow and deep embedding inside our Signal type. We also have a lightweight sized-type mechanism, allowing for MATLAB style matrix based specifications to be directly expressed in Kansas Lava.
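
As a rough illustration of the Chalmers-style, shallow approach the abstract mentions, the following minimal sketch models signals as streams of per-cycle values and describes a half adder structurally. It is an assumption-laden toy: the names Signal, and2, xor2, and halfAdder are illustrative, and Kansas Lava's actual Signal type additionally carries a deep embedding so the same description can be reified and rendered as VHDL.

-- A minimal sketch of a Chalmers-style shallow embedding: signals as
-- streams of values, gates as pointwise functions. Kansas Lava's real
-- Signal type also carries a deep embedding for VHDL generation; the
-- names below are illustrative only, not the library's API.
module LavaSketch where

newtype Signal a = Signal [a]          -- a signal is a stream of per-cycle values

lift2 :: (a -> b -> c) -> Signal a -> Signal b -> Signal c
lift2 f (Signal xs) (Signal ys) = Signal (zipWith f xs ys)

and2, xor2 :: Signal Bool -> Signal Bool -> Signal Bool
and2 = lift2 (&&)
xor2 = lift2 (/=)                      -- XOR on booleans

-- A half adder described structurally: (sum, carry).
halfAdder :: Signal Bool -> Signal Bool -> (Signal Bool, Signal Bool)
halfAdder a b = (xor2 a b, and2 a b)

-- "Simulation" is just running the shallow embedding on test streams.
simulate :: Int -> Signal a -> [a]
simulate n (Signal xs) = take n xs

-- ghci> simulate 4 (fst (halfAdder (Signal (cycle [False,True]))
--                                  (Signal (cycle [False,False,True,True]))))
-- [False,True,True,False]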


Emerging Technologies and Factory Automation | 2005

hthreads: a hardware/software co-designed multithreaded RTOS kernel

David L. Andrews; Wesley Peck; Jason Agron; K. Preston; Ed Komp; Michael Finley; Ron Sass

This paper describes the hardware/software co-design of a multithreaded RTOS kernel on a new Xilinx Virtex-II Pro FPGA. Our multithreaded RTOS kernel is an integral part of our hybrid thread programming model, which is being developed for hybrid systems comprising both software-resident and hardware-resident concurrently executing threads. Additionally, we provide new interrupt semantics by migrating uncontrollable asynchronous interrupt invocations into controllable, priority-based thread scheduling requests. Performance tests verify that our hardware/software co-design approach provides significantly tighter bounds on scheduling precision and significant jitter reduction when compared to traditionally implemented RTOS kernels. It also eliminates the jitter associated with asynchronous interrupt invocations.


Real-Time Systems Symposium | 2006

Run-Time Services for Hybrid CPU/FPGA Systems on Chip

Jason Agron; Wesley Peck; Erik K. Anderson; David L. Andrews; Ed Komp; Ron Sass; Fabrice Baijot; Jim Stevens

Modern FPGA devices, which include one or more processor cores as diffused IP on the silicon die, provide an excellent platform for developing custom multiprocessor system-on-programmable-chip (MPSoPC) architectures. As researchers investigate new methods for migrating portions of applications into custom hardware circuits, it is also critical to develop new run-time service frameworks to support these capabilities. Hthreads (HybridThreads) is a multithreaded RTOS kernel for hybrid FPGA/CPU systems designed to meet this growing need. A key capability of hthreads is the migration of thread management, synchronization primitives, and run-time scheduling services for both hardware and software threads into hardware. This paper describes the hthreads scheduler, a key component for controlling both software-resident threads (SW threads) and threads implemented in programmable logic (HW threads). Run-time analysis shows that the hthreads scheduler module reduces unwanted system overhead and jitter compared to traditional software schedulers, while fielding scheduling requests from both hardware and software threads in parallel with application execution. Run-time analysis also shows that the scheduler achieves constant-time scheduling for up to 256 active threads with a total of 128 different priority levels, while using uniform APIs for threads requesting OS services from either side of the hardware/software boundary.
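
The constant-time scheduling claim can be illustrated in software with a ready bitmap over a fixed set of priority levels: finding the highest-priority non-empty level is a fixed-width bit operation, independent of how many threads are active. The sketch below is only an illustrative model (64 levels for brevity rather than the paper's 128, and an IntMap of FIFO queues standing in for the hardware structures); it is not the hthreads scheduler's actual design.

-- Illustrative software sketch of constant-time priority selection over a
-- fixed number of priority levels. hthreads performs the equivalent
-- selection in programmable logic; the representation here is an
-- assumption for illustration only.
module SchedulerSketch where

import Data.Bits (setBit, clearBit, countTrailingZeros)
import Data.Word (Word64)
import qualified Data.IntMap.Strict as IM

type ThreadId = Int
type Priority = Int                    -- 0 = highest priority

data ReadyQueues = ReadyQueues
  { readyBitmap :: !Word64             -- bit p set <=> some thread ready at priority p
  , queues      :: !(IM.IntMap [ThreadId])
  }

emptyRQ :: ReadyQueues
emptyRQ = ReadyQueues 0 IM.empty

-- Add a thread to the tail of its priority level's FIFO queue.
enqueue :: Priority -> ThreadId -> ReadyQueues -> ReadyQueues
enqueue p tid (ReadyQueues bm qs) =
  ReadyQueues (setBit bm p) (IM.insertWith (flip (++)) p [tid] qs)

-- Select the highest-priority ready thread. Finding the lowest set bit is
-- a fixed-width operation, so selection cost does not grow with the
-- number of active threads.
pickNext :: ReadyQueues -> Maybe (ThreadId, ReadyQueues)
pickNext (ReadyQueues bm qs)
  | bm == 0   = Nothing
  | otherwise = case IM.lookup p qs of
      Just (tid : rest) ->
        let qs' = if null rest then IM.delete p qs else IM.insert p rest qs
            bm' = if null rest then clearBit bm p else bm
        in  Just (tid, ReadyQueues bm' qs')
      _ -> Nothing
  where
    p = countTrailingZeros bm

-- ghci> fmap fst (pickNext (enqueue 5 42 (enqueue 2 7 emptyRQ)))
-- Just 7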


Symposium/Workshop on Haskell | 2013

The HERMIT in the machine: a plugin for the interactive transformation of GHC core language programs

Andrew Farmer; Andy Gill; Ed Komp; Neil Sculthorpe

The importance of reasoning about and refactoring programs is a central tenet of functional programming. Yet our compilers and development toolchains provide only rudimentary support for these tasks. This paper introduces a programmatic and compiler-centric interface that facilitates refactoring and equational reasoning. To develop our ideas, we have implemented HERMIT, a toolkit enabling informal but systematic transformation of Haskell programs from inside the Glasgow Haskell Compiler's optimization pipeline. With HERMIT, users can experiment with optimizations and equational reasoning, while the tedious heavy lifting of performing the actual transformations is done for them. HERMIT provides a transformation API that can be used to build higher-level rewrite tools. One use case is prototyping new optimizations as clients of this API before they are committed to the GHC toolchain. We describe a HERMIT application: a read-eval-print shell for performing transformations using HERMIT. We also demonstrate using this shell to prototype an optimization on a specific example, and report our initial experiences and remaining challenges.
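
To give a flavor of what a programmatic transformation interface looks like, the sketch below defines a rewrite as a partial function on a toy expression type and an "anywhere" strategy that applies it throughout a term. HERMIT itself operates on GHC Core; the Expr type and combinator names here are assumptions for illustration, not HERMIT's API.

-- A minimal sketch of a rewrite-based transformation interface: a Rewrite
-- is a partial function on an expression tree, and strategies such as
-- "anywhere" apply it throughout a program. Illustrative only.
module RewriteSketch where

data Expr
  = Var String
  | Lit Int
  | App Expr Expr
  | Lam String Expr
  | Let String Expr Expr
  deriving (Show, Eq)

type Rewrite = Expr -> Maybe Expr       -- Nothing means "does not apply here"

-- One equational step: inline a trivial let binding,
-- i.e.  let x = e in x  ==>  e.
inlineTrivialLet :: Rewrite
inlineTrivialLet (Let x e (Var y)) | x == y = Just e
inlineTrivialLet _                          = Nothing

-- Apply a rewrite at every node, top-down, leaving unmatched nodes alone.
anywhere :: Rewrite -> Expr -> Expr
anywhere r e = descend (maybe e id (r e))
  where
    descend (App f a)     = App (anywhere r f) (anywhere r a)
    descend (Lam x b)     = Lam x (anywhere r b)
    descend (Let x rhs b) = Let x (anywhere r rhs) (anywhere r b)
    descend leaf          = leaf

-- ghci> anywhere inlineTrivialLet (Let "x" (Lit 1) (Var "x"))
-- Lit 1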


Field-Programmable Custom Computing Machines | 2011

Using Functional Programming to Generate an LDPC Forward Error Corrector

Andy Gill; Tristan Bull; Daniel DePardo; Andrew Farmer; Ed Komp; Erik Perrins

FPGAs as commodities offer a resource for high-performance computation that is unmatched in flexibility and price/performance. As a lab, we are interested in high-level descriptions of computation and data, and in how they may be customized to map effectively onto FPGA fabrics. This paper describes our tool chain, approach, and methodology for FPGA utilization. We give a case study of the generation of a low-density parity-check (LDPC) forward error correction algorithm, and discuss the specific challenges we faced in using FPGAs as our target.
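
For readers unfamiliar with LDPC codes, the executable model below shows the parity check such a corrector enforces: a received word is a valid codeword exactly when its syndrome with respect to the parity-check matrix H is zero over GF(2). This list-based Haskell model only illustrates the arithmetic; the paper's contribution is generating an efficient hardware implementation, and the small matrix here is a made-up example, not one from the paper.

-- A minimal, list-based model of the LDPC parity check: a word c is a
-- codeword iff its syndrome H * c over GF(2) is all zeros. Illustration
-- only; not the generated hardware described in the paper.
module LdpcSketch where

type Bit = Bool
type Row = [Bit]

-- Syndrome of a received word: one parity bit per check (row of H).
syndrome :: [Row] -> [Bit] -> [Bit]
syndrome h c = [ foldr xor False (zipWith (&&) row c) | row <- h ]
  where xor = (/=)

isCodeword :: [Row] -> [Bit] -> Bool
isCodeword h c = all not (syndrome h c)

-- Tiny example parity-check matrix (illustrative only).
exampleH :: [Row]
exampleH = map (map (== (1 :: Int)))
  [ [1,1,0,1,0,0]
  , [0,1,1,0,1,0]
  , [1,0,1,0,0,1] ]

-- ghci> syndrome exampleH (replicate 6 False)
-- [False,False,False]   (the all-zero word is always a codeword)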


Higher-Order and Symbolic Computation | 2012

Types and associated type families for hardware simulation and synthesis

Andy Gill; Tristan Bull; Andrew Farmer; Garrin Kimmell; Ed Komp

In this article we overview the design and implementation of the second generation of Kansas Lava. Driven by the needs and experiences of implementing telemetry decoders and other circuits, we have made a number of improvements to both the external API and the internal representations used. We have retained our dual shallow/deep representation of signals in general, but now have a number of externally visible abstractions for combinatorial and sequential circuits, and enabled signals. We introduce these abstractions, as well as our abstractions for reading and writing memory. Internally, we found the need to represent unknown values inside our circuits, so we made aggressive use of associated type families to lift our values to allow unknowns, in a principled and regular way. We discuss this design decision, how it unfortunately complicates the internals of Kansas Lava, and how we mitigate this complexity. Finally, when connecting Kansas Lava to the real world, the standardized idiom of using named input and output ports is provided by Kansas Lava using a new monad, called Fabric. We present the design of this Fabric monad, and illustrate its use in a small but complete example.
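
The idea of lifting values to allow unknowns can be sketched with an associated family that pairs every representable type with a version admitting an unknown (for example, an undriven wire during simulation). The class and method names below illustrate the technique the abstract describes; they are not necessarily Kansas Lava's exact API.

{-# LANGUAGE TypeFamilies #-}
-- A sketch of using an associated data family to give each representable
-- type a lifted version that can also be "unknown". Illustrative only.
module UnknownSketch where

class Rep a where
  data X a                              -- the lifted representation of a
  toX   :: Maybe a -> X a               -- Nothing encodes an unknown value
  fromX :: X a -> Maybe a

instance Rep Bool where
  data X Bool = XBool (Maybe Bool)
  toX             = XBool
  fromX (XBool m) = m

-- Lift a total function so that an unknown input yields an unknown output,
-- keeping the "unknown" plumbing regular across all operations.
liftX :: (Rep a, Rep b) => (a -> b) -> X a -> X b
liftX f = toX . fmap f . fromX

-- fromX (liftX not (toX (Just True)))            == Just False
-- fromX (liftX not (toX Nothing :: X Bool))      == Nothing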


Trends in Functional Programming | 2010

Types and Type Families for Hardware Simulation and Synthesis

Andy Gill; Tristan Bull; Andrew Farmer; Garrin Kimmell; Ed Komp

In this paper, we overview the design and implementation of our latest version of Kansas Lava. Driven by needs and experiences of implementing telemetry circuits, we have made a number of recent improvements to both the external API and the internal representations used. We have retained our dual shallow/deep representation of signals in general, but now have a number of externally visible abstractions for combinatorial, sequential, and enabled signals. We introduce these abstractions, as well as our new abstractions for memory and memory updates. Internally, we found the need to represent unknown values inside our circuits, so we made aggressive use of type families to lift our values in a principled and regular way. We discuss this design decision, how it unfortunately complicates the internals of Kansas Lava, and how we mitigate this complexity.

Collaboration


Dive into Ed Komp's collaborations.

Top Co-Authors

Ron Sass

University of North Carolina at Charlotte
