James P. Durbano
University of Delaware
Publications
Featured research published by James P. Durbano.
IEEE Antennas and Wireless Propagation Letters | 2003
James P. Durbano; Fernando E. Ortiz; John R. Humphrey; Mark S. Mirotznik; Dennis W. Prather
In order to take advantage of the significant benefits afforded by computational electromagnetic techniques, such as the finite-difference time-domain (FDTD) method, solvers capable of analyzing realistic problems in a reasonable time frame are required. Although software-based solvers are frequently used, they are often too slow to be of practical use. To speed up computations, hardware-based implementations of the FDTD method have recently been proposed. Although these designs are functionally correct, to date, they have not provided a practical and scalable solution. To this end, we have developed an architecture that not only overcomes the limitations of previous accelerators, but also represents the first three-dimensional FDTD accelerator implemented in physical hardware. We present a high-level view of the system architecture and describe the basic functionality of each module involved in the computational flow. We then present our implementation results and compare them with current PC-based FDTD solutions. These results indicate that hardware solutions will, in the near future, surpass existing PC throughputs, and will ultimately rival the performance of PC clusters.
IEEE Antennas and Propagation Society International Symposium | 2004
James P. Durbano; John R. Humphrey; Fernando E. Ortiz; Petersen F. Curt; Dennis W. Prather; Mark S. Mirotznik
Although the importance of fast, accurate computational electromagnetic (CEM) solvers is readily apparent, how to construct them is not. By nature, CEM algorithms are both computationally and memory intensive. Furthermore, the serial nature of most software-based implementations does not take advantage of the inherent parallelism found in many CEM algorithms. In an attempt to exploit parallelism, supercomputers and computer clusters are employed. However, these solutions can be prohibitively expensive and frequently impractical. Thus, a CEM accelerator or CEM co-processor would provide the community with much-needed processing power. This would enable iterative designs and designs that would otherwise be impractical to analyze. To this end, we are developing a full-3D, hardware-based accelerator for the finite-difference time-domain (FDTD) method (K.S. Yee, IEEE Trans. Antennas and Propag., vol. 14, pp. 302-307, 1966). This accelerator provides speedups of up to three orders of magnitude over single-PC solutions and will surpass the throughputs of PC clusters. In this paper, we briefly summarize previous work in this area, where it has fallen short, and how our work fills the void. We then describe the current status of this project, summarizing our achievements to date and the work that remains. We conclude with the projected results of our accelerator.
Proceedings of SPIE | 2006
Eric J. Kelmelis; James P. Durbano; John R. Humphrey; Fernando E. Ortiz; Petersen F. Curt
Designing nanoscale devices presents a number of unique challenges. As device features shrink, the computational demands of the simulations necessary to accurately model them increase significantly. This is a result of not only the increasing level of detail in the device design itself, but also the need to use more accurate models. The approximations that are generally made when dealing with larger devices break down as feature sizes decrease. This can be seen in the optics field when contrasting the complexity of physical optics models with those requiring a rigorous solution to Maxwell's equations. This added complexity leads to more demanding calculations, stressing computational resources and driving research to overcome these limitations. There are traditionally two means of improving simulation times as model complexity grows beyond available computational resources: modifying the underlying algorithms to maintain sufficient precision while reducing overall computations, and increasing the power of the computational system. In this paper, we explore the latter. Recent advances in commodity hardware technologies, particularly field-programmable gate arrays (FPGAs) and graphics processing units (GPUs), have allowed the creation of desktop-style devices capable of outperforming PC clusters. We will describe the key hardware technologies required to build such a device and then discuss their application to the modeling and simulation of nanophotonic devices. We have found that FPGAs and GPUs can be used to significantly reduce simulation times and allow for the solution of much larger problems.
Field-Programmable Custom Computing Machines | 2003
James P. Durbano; Fernando E. Ortiz; John R. Humphrey; Dennis W. Prather; Mark S. Mirotznik
Maxwell's equations, which govern electromagnetic propagation, are a system of coupled, differential equations. As such, they can be represented in difference form, thus allowing their numerical solution. By implementing both the temporal and spatial derivatives of Maxwell's equations in difference form, we arrive at one of the most common computational electromagnetic algorithms, the Finite-Difference Time-Domain (FDTD) method (Yee, 1966). In this technique, the region of interest is sampled to generate a grid of points, hereafter referred to as a mesh. The discretized form of Maxwell's equations is then solved at each point in the mesh to determine the associated electromagnetic fields. In this extended abstract, we present an architecture that overcomes the previous limitations. We begin with a high-level description of the computational flow of this architecture.
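The leapfrog update described in this abstract can be sketched in one dimension. The following is a generic illustration of the Yee scheme in normalized units (c = 1, arbitrary grid size and source placement, all chosen here for illustration), not the hardware architecture the paper presents:

```python
import numpy as np

nx, nt = 200, 300
dx = 1.0
dt = 0.5 * dx        # Courant number 0.5 with c = 1: stable in 1-D

ez = np.zeros(nx)      # electric-field samples on integer grid points
hy = np.zeros(nx - 1)  # magnetic-field samples, staggered half a cell

for n in range(nt):
    # update H from the spatial difference of E (discrete Faraday's law)
    hy += (dt / dx) * (ez[1:] - ez[:-1])
    # update interior E from the spatial difference of H (discrete Ampere's law);
    # the untouched endpoints act as perfect-electric-conductor boundaries
    ez[1:-1] += (dt / dx) * (hy[1:] - hy[:-1])
    # hypothetical soft source: inject a Gaussian pulse at the grid center
    ez[nx // 2] += np.exp(-(((n - 30) / 10.0) ** 2))
```

Each time step reads each field point's immediate neighbors and performs a fixed number of multiply-accumulates, which is exactly the regular, local structure that makes FDTD amenable to the pipelined hardware implementations the abstract describes.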
Proceedings of SPIE, the International Society for Optical Engineering | 2006
Eric J. Kelmelis; John R. Humphrey; James P. Durbano; Fernando E. Ortiz
The performance of modeling and simulation tools is inherently tied to the platform on which they are implemented. In most cases, this platform is a microprocessor, either in a desktop PC, PC cluster, or supercomputer. Microprocessors are used because of their familiarity to developers, not necessarily their applicability to the problems of interest. We have developed the underlying techniques and technologies to produce supercomputer performance from a standard desktop workstation for modeling and simulation applications. This is accomplished through the combined use of graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and standard microprocessors. Each of these platforms has unique strengths and weaknesses but, when used in concert, can rival the computational power of a high-performance computer (HPC). By adding a powerful GPU and our custom-designed FPGA card to a commodity desktop PC, we have created simulation tools capable of replacing massive computer clusters with a single workstation. We present this work in its initial embodiment: simulators for electromagnetic wave propagation and interaction. We discuss the trade-offs of each independent technology, GPUs, FPGAs, and microprocessors, and how we efficiently partition algorithms to take advantage of the strengths of each while masking their weaknesses. We conclude by discussing how to further enhance the computational performance of the underlying desktop supercomputer and extend it to other application areas.
Proceedings of SPIE | 2006
Fernando E. Ortiz; James P. Durbano; Eric J. Kelmelis; Michael R. Bodnar
Synthetic Aperture Radar (SAR) techniques employ radar waves to generate high-resolution images in all illumination/weather conditions. The onboard implementation of the image reconstruction algorithms allows for the transmission of real-time video feeds, rather than raw radar data, from unmanned aerial vehicles (UAVs), saving significant communication bandwidth. This in turn saves power, enables longer missions, and allows the transmission of more useful information to the ground. For this application, we created a hardware architecture for a portable implementation of the motion compensation algorithms, which are more computationally intensive than the SAR reconstruction itself, and without which the quality of the SAR images is severely degraded, rendering them unusable.
Proceedings of SPIE | 2006
Fernando E. Ortiz; Carmen J. Carrano; Eric J. Kelmelis; James P. Durbano
In this paper, we discuss the real-time compensation of air turbulence in imaging through long atmospheric paths. We propose the use of a reconfigurable hardware platform, specifically field-programmable gate arrays (FPGAs), to reduce costs and development time, as well as increase flexibility and reusability. We present the results of our acceleration efforts to date (40x speedup) and our strategy to achieve a real-time, atmospheric compensation solver for high-definition video signals.
Passive Millimeter-Wave Imaging Technology X | 2007
Fernando E. Ortiz; Eric J. Kelmelis; James P. Durbano; Dennis W. Prather
Superresolution reconstruction (SR-REC) algorithms combine multiple frames captured using spatially under-sampled imagers to produce a single higher-resolution image. Sub-pixel information is gained from natural motion within the image instead of active pixel scanning (dithering/micro-scanning), eliminating the reliability issues and power consumption associated with moving parts. One of the major computational challenges associated with SR-REC methods is the estimation of the optical flow of the image (i.e., determining the unknown pixel shifts between consecutive frames). A linear least squares approximation is the simplest method for estimating the pixel movements from the captured data, but the size of the problem (directly proportional to the number of pixels in the image) creates a computational bottleneck, which in turn limits the usability of this algorithm in real-time portable systems. We propose the use of a reconfigurable platform to implement these computations in a low power/size environment, suitable for integration into portable millimeter wave imagers.
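The least-squares shift estimation described in this abstract can be illustrated on synthetic data. This is a minimal sketch of fitting a single global sub-pixel translation via the brightness-constancy linearization (the frames, shift values, and Gaussian test pattern are all hypothetical, not from the paper):

```python
import numpy as np

n = 64
y, x = np.mgrid[0:n, 0:n].astype(float)
true_u, true_v = 0.30, -0.20  # known sub-pixel shift used to synthesize frame 2

def frame(sx, sy):
    # smooth Gaussian blob centered at (n/2 + sx, n/2 + sy)
    return np.exp(-(((x - sx) - n / 2) ** 2 + ((y - sy) - n / 2) ** 2) / 50.0)

i1, i2 = frame(0.0, 0.0), frame(true_u, true_v)

# First-order Taylor expansion of brightness constancy:
#   i1(p) - i2(p) ≈ Ix*u + Iy*v,  one equation per pixel
iy, ix = np.gradient(i1)                      # axis 0 is y, axis 1 is x
A = np.column_stack([ix.ravel(), iy.ravel()])
b = (i1 - i2).ravel()
(u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
# (u, v) should approximate (true_u, true_v)
```

The system has one row per pixel, which is the size-proportional bottleneck the abstract identifies; the normal-equations form reduces it to a fixed 2x2 solve whose accumulations map naturally onto reconfigurable hardware.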
IEEE Antennas and Propagation Society International Symposium | 2006
Ahmed Sharkawy; James P. Durbano; Shouyuan Shi; Fernando E. Ortiz; Petersen F. Curt
To accurately design and simulate left-handed material (LHM) structures, it is necessary to examine a large number of unit cells. Such an analysis requires a numerical platform capable of handling computationally intense problems. To this end, a hardware-based solver was used to analyze an LHM structure composed of split-ring resonators (SRR) and wires. Specifically, the hardware accelerator was used to calculate the transmission spectra of the LHM in order to identify the frequency regions where the permittivity and permeability are negative. From these simulations, we confirmed that a negative refractive index exists at specific frequencies for this structure. Thus, we have demonstrated that a hardware-based solver enables the analysis of LHM structures that would otherwise be impractical with standard software simulation suites.
IEEE Antennas and Propagation Society International Symposium | 2006
James P. Durbano; Fernando E. Ortiz; Ahmed Sharkawy; Michael R. Bodnar
This paper introduces a computational electromagnetic (CEM)-solver benchmarking suite, called CEMPACK. CEMPACK consists of several synthetic benchmark problems that can be used to characterize CEM-solver implementations. The problems are synthetic in that they do not necessarily correspond to physically useful problems or scenarios. Rather, they attempt to stress various aspects of a solver implementation, including maximum problem size, absorbing boundary conditions, and various source types.