Kenneth Moreland
Sandia National Laboratories
Publications
Featured research published by Kenneth Moreland.
SIGGRAPH/Eurographics Conference on Graphics Hardware | 2003
Kenneth Moreland; Edward Angel
The Fourier transform is a well-known and widely used tool in many scientific and engineering fields. It is essential for many image processing techniques, including filtering, manipulation, correction, and compression. As such, the computer graphics community could benefit greatly from such a tool if it were part of the graphics pipeline. Recently, computer graphics hardware has become amazingly cheap, powerful, and flexible. This paper describes how to use the current generation of graphics cards to perform the fast Fourier transform (FFT) directly on the card. We demonstrate a system that can synthesize an image by conventional means, perform the FFT, filter the image, and finally apply the inverse FFT in well under 1 second for a 512 by 512 image. This work paves the way for performing complicated, real-time image processing as part of the rendering pipeline.
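Although the paper implements the FFT in graphics hardware, the pipeline it describes (synthesize, transform, filter, invert) can be sketched on the CPU. The NumPy sketch below is purely illustrative and assumes an ideal low-pass filter; it is not the paper's GPU implementation.

```python
import numpy as np

# CPU sketch of the frequency-domain filtering pipeline the paper runs on
# the GPU: forward FFT, multiply by a filter, inverse FFT.
# (Illustrative only; the paper implements the FFT in fragment programs.)

def lowpass_filter_image(image: np.ndarray, cutoff: float) -> np.ndarray:
    """Apply an ideal low-pass filter to a grayscale image in the frequency domain."""
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))  # forward FFT, DC at center

    # Build a circular low-pass mask: keep frequencies within `cutoff`
    # (a fraction of the Nyquist frequency), zero out the rest.
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    mask = radius <= cutoff

    filtered = spectrum * mask                       # filter in frequency space
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))  # inverse FFT

# Example: filter a synthetic 512x512 image, the size used in the paper's timing test.
image = np.random.rand(512, 512)
smoothed = lowpass_filter_image(image, cutoff=0.1)
```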
IEEE Symposium on Large Data Analysis and Visualization | 2011
Nathan D. Fabian; Kenneth Moreland; David C. Thompson; Andrew C. Bauer; Pat Marion; Berk Geveci; Michel Rasquin; Kenneth E. Jansen
As high performance computing approaches exascale, CPU capability far outpaces disk write speed, and in situ visualization becomes an essential part of an analyst's workflow. In this paper, we describe the ParaView Coprocessing Library, a framework for in situ visualization and analysis coprocessing. We describe how coprocessing algorithms (building on many from VTK) can be linked and executed directly from within a scientific simulation or other applications that need visualization and analysis. We also describe how the ParaView Coprocessing Library can write out partially processed, compressed, or extracted data readable by a traditional visualization application for interactive post-processing. Finally, we demonstrate the library's scalability in a number of real-world scenarios.
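The coupling pattern the library enables can be summarized roughly as follows. The sketch below is a hypothetical stand-in, not the ParaView Coprocessing Library's actual API; every class and method name here is invented for illustration.

```python
# Hypothetical sketch of the in situ coprocessing pattern: the simulation
# periodically hands its live data to an analysis pipeline instead of
# writing raw output to disk. Names are illustrative, not the real API.

class CoProcessor:
    """Stand-in for a coprocessing library driven by a pipeline script."""

    def __init__(self, pipeline_script):
        self.pipeline_script = pipeline_script

    def should_process(self, timestep):
        # The real library lets the pipeline decide which steps it needs,
        # so the simulation skips packaging data when nothing will run.
        return timestep % 10 == 0

    def process(self, timestep, mesh, fields):
        # In the real library this executes VTK-based filters in situ and
        # may write extracted or compressed results for post-processing.
        print(f"step {timestep}: analyzing {len(fields)} fields in situ")

def run_simulation(num_steps, coproc):
    mesh, fields = None, {"pressure": [], "velocity": []}  # placeholder data
    for step in range(num_steps):
        # ... advance the simulation state here ...
        if coproc.should_process(step):         # ask before paying any cost
            coproc.process(step, mesh, fields)  # analyze without touching disk

run_simulation(100, CoProcessor("pipeline.py"))
```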
International Symposium on Visual Computing | 2009
Kenneth Moreland
One of the most fundamental features of scientific visualization is the process of mapping scalar values to colors. This process allows us to view scalar fields by coloring surfaces and volumes. Unfortunately, the majority of scientific visualization tools still use a color map that is famous for its ineffectiveness: the rainbow color map. This color map, which naively sweeps through the most saturated colors, is well known for its ability to obscure data, introduce artifacts, and confuse users. Although many alternate color maps have been proposed, none have achieved widespread adoption by the visualization community. This paper explores diverging color maps (sometimes also called ratio, bipolar, or double-ended color maps) for use in scientific visualization, provides a diverging color map that generally performs well in scientific visualization applications, and presents an algorithm that allows users to easily generate their own customized color maps.
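As a rough illustration of the diverging idea, the sketch below interpolates from a cool endpoint through a neutral midpoint to a warm endpoint, using approximately the cool-warm endpoints from the paper. The paper's actual algorithm interpolates in Msh space (a polar form of CIELAB) for perceptual smoothness; the plain RGB interpolation here is only an approximation of that idea.

```python
import numpy as np

# Simplified sketch of a diverging ("double-ended") color map: two hues
# joined through a neutral midpoint. RGB interpolation is a rough stand-in
# for the paper's perceptually smoother Msh-space interpolation.

def diverging_map(t: float,
                  low=(0.23, 0.30, 0.75),    # cool blue endpoint
                  mid=(0.87, 0.87, 0.87),    # neutral (near-white) midpoint
                  high=(0.70, 0.02, 0.15)):  # warm red endpoint
    """Map a scalar t in [0, 1] to an RGB color on a diverging color map."""
    low, mid, high = map(np.asarray, (low, mid, high))
    if t < 0.5:
        return tuple(low + (mid - low) * (t / 0.5))
    return tuple(mid + (high - mid) * ((t - 0.5) / 0.5))

# Sample the map at a few scalar values.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, diverging_map(t))
```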
Symposium on Volume Visualization | 2002
Brian N. Wylie; Kenneth Moreland; Lee Ann Fisk; Patricia Crossno
Projective methods for volume rendering currently represent the best approach for interactive visualization of unstructured data sets. We present a technique for tetrahedral projection using the programmable vertex shaders on current-generation commodity graphics cards. The technique is based on Shirley and Tuchman's Projected Tetrahedra (PT) algorithm and allows tetrahedral elements to be volume scan converted within the graphics processing unit. Our technique requires no pre-processing of the data and no additional data structures. Our initial implementation allows interactive viewing of large unstructured datasets on a desktop personal computer.
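To make the geometry concrete, here is a CPU sketch of the "class 2" case of the PT decomposition, where the tetrahedron projects to a quadrilateral and the thick vertex lies at the crossing of the diagonals. This is an illustrative reconstruction, not the paper's vertex-shader code; function and variable names are ours.

```python
import numpy as np

# Sketch of the geometric core of Shirley and Tuchman's Projected Tetrahedra
# decomposition (the computation the paper moves into vertex shaders).
# Class 2: the four projected vertices form a quadrilateral, and the thick
# vertex lies where the two diagonals cross.

def class2_thick_vertex(p0, p1, p2, p3):
    """Given screen-space (x, y, depth) positions of a tetrahedron whose
    projection is the quadrilateral p0-p1-p2-p3 (diagonals p0-p2 and p1-p3),
    return the crossing point and the ray thickness through the cell there."""
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    # Solve p0 + s*(p2 - p0) = p1 + t*(p3 - p1) in 2D screen coordinates.
    a = np.array([[p2[0] - p0[0], p1[0] - p3[0]],
                  [p2[1] - p0[1], p1[1] - p3[1]]])
    b = np.array([p1[0] - p0[0], p1[1] - p0[1]])
    s, t = np.linalg.solve(a, b)
    z_front = p0[2] + s * (p2[2] - p0[2])   # depth along one diagonal edge
    z_back = p1[2] + t * (p3[2] - p1[2])    # depth along the other
    crossing = p0[:2] + s * (p2[:2] - p0[:2])
    return crossing, abs(z_front - z_back)  # thickness drives the opacity

# Example: a tetrahedron already projected to screen space.
xy, thickness = class2_thick_vertex((0, 0, 0.2), (1, 0, 0.4),
                                    (1, 1, 0.3), (0, 1, 0.5))
print(xy, thickness)   # crossing at (0.5, 0.5), thickness 0.2
```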
Eurographics Workshop on Parallel Graphics and Visualization | 2006
Andy Cedilnik; Berk Geveci; Kenneth Moreland; James P. Ahrens; Jean M. Favre
Scientists are using remote parallel computing resources to run simulations that model a range of scientific problems. Visualization tools are used to understand the massive datasets that result from these simulations. A number of problems must be overcome to create a visualization tool that works effectively in this scenario, including how to process and display massive datasets and how to communicate data and control information between geographically distributed computing and visualization resources. We believe the key is a solution that incorporates a data-parallel data server, a data-parallel render server, and a client controller. Using this data server/render server/client model as a basis, this paper describes in detail a set of integrated solutions to remote and distributed visualization problems, including an efficient M-to-N parallel algorithm for transferring geometry data, an effective server-interface abstraction, and parallel rendering techniques for a range of rendering modalities, including tiled display walls and CAVEs.
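The M-to-N geometry transfer can be pictured with a simple redistribution plan in which each of the N render processes receives pieces from some subset of the M data-server processes. The round-robin assignment below is an illustrative stand-in; the paper's actual transfer algorithm is not reproduced here.

```python
# Illustrative sketch of an M-to-N redistribution plan like the geometry
# transfer between an M-process data server and an N-process render server.
# Round-robin is a simple stand-in for the paper's algorithm.

def m_to_n_plan(m: int, n: int):
    """Return, for each of n render processes, the list of data-server
    ranks whose geometry it will receive."""
    plan = {rank: [] for rank in range(n)}
    for producer in range(m):
        plan[producer % n].append(producer)  # spread producers evenly
    return plan

# Example: 8 data-server processes feeding 3 render-server processes.
print(m_to_n_plan(8, 3))   # {0: [0, 3, 6], 1: [1, 4, 7], 2: [2, 5]}
```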
IEEE Computer Graphics and Applications | 2001
Brian N. Wylie; Constantine Pavlakos; Vasily Lewis; Kenneth Moreland
Sandia National Laboratories uses PC clusters and commodity graphics cards to achieve higher rendering performance on extreme data sets. The main obstacle in using cluster-based graphics systems is the difficulty of realizing the full aggregate performance of all the individual graphics accelerators, particularly for very large data sets that exceed the capacity and performance characteristics of any single node. Based on our efforts to achieve higher performance, we present results from a parallel sort-last implementation developed by the scalable rendering project at Sandia National Laboratories. Our sort-last library (libpglc) can be linked to an existing parallel application to achieve high rendering rates. We ran performance tests on a 64-node PC cluster populated with commodity graphics cards. Applications using libpglc have demonstrated rendering performance of 300 million polygons per second, approximately two orders of magnitude greater than the performance on an SGI Infinite Reality system for similar applications.
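The core of any sort-last approach is depth compositing: each node renders all of its geometry into full-resolution color and depth buffers, which are then merged pixel by pixel. The serial NumPy sketch below illustrates only the merge rule; libpglc performs this compositing in parallel across nodes.

```python
import numpy as np

# Sketch of the depth compositing at the heart of sort-last rendering:
# per-node framebuffers are merged by keeping the nearest fragment at
# each pixel. (Illustrative only; not libpglc's parallel implementation.)

def depth_composite(colors, depths):
    """Merge per-node framebuffers: colors is a list of HxWx3 arrays,
    depths a list of HxW arrays; the smaller depth wins per pixel."""
    out_color = colors[0].copy()
    out_depth = depths[0].copy()
    for color, depth in zip(colors[1:], depths[1:]):
        closer = depth < out_depth               # fragments nearer the eye
        out_color[closer] = color[closer]
        out_depth = np.minimum(out_depth, depth)
    return out_color, out_depth

# Example: composite two 4x4 framebuffers.
c = [np.random.rand(4, 4, 3) for _ in range(2)]
d = [np.random.rand(4, 4) for _ in range(2)]
final_color, final_depth = depth_composite(c, d)
```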
Proceedings of the IEEE 2001 Symposium on Parallel and Large-Data Visualization and Graphics | 2001
Kenneth Moreland; Brian N. Wylie; Constantine Pavlakos
Due to the impressive price-performance of today's PC-based graphics accelerator cards, Sandia National Laboratories is attempting to use PC clusters to render extremely large data sets in interactive applications. This paper describes a sort-last parallel rendering system running on a PC cluster that is capable of rendering enormous amounts of geometry onto high-resolution tile displays by taking advantage of the spatial coherency that is inherent in our data. Furthermore, it is capable of scaling to larger sized input data or higher resolution displays by increasing the size of the cluster. Our prototype is now capable of rendering 120 million triangles per second on a 12 mega-pixel display.
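The spatial-coherency optimization mentioned above can be sketched as a tile-overlap test: a node only needs to ship pixels for the display tiles its screen-space geometry actually covers. The function and tile layout below are illustrative assumptions, not the system's code.

```python
# Sketch of the spatial-coherency optimization: compute which display
# tiles a node's screen-space bounding box overlaps, so only those tiles'
# pixels are transferred. Layout and names are illustrative.

def overlapped_tiles(bbox, tile_w, tile_h, cols, rows):
    """Given a node's screen-space bounding box (xmin, ymin, xmax, ymax) in
    pixels, return the (col, row) indices of the display tiles it touches."""
    xmin, ymin, xmax, ymax = bbox
    first_col, last_col = int(xmin // tile_w), int(xmax // tile_w)
    first_row, last_row = int(ymin // tile_h), int(ymax // tile_h)
    return [(c, r)
            for r in range(max(0, first_row), min(rows, last_row + 1))
            for c in range(max(0, first_col), min(cols, last_col + 1))]

# Example: a 4x3 wall of 1280x1024 tiles (a hypothetical layout).
print(overlapped_tiles((1500, 200, 3000, 1800), 1280, 1024, 4, 3))
# -> [(1, 0), (2, 0), (1, 1), (2, 1)]
```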
IEEE Symposium on Large Data Analysis and Visualization | 2011
Kenneth Moreland; Utkarsh Ayachit; Berk Geveci; Kwan-Liu Ma
Experts agree that the exascale machine will comprise processors that contain many cores, which in turn will necessitate a much higher degree of concurrency. Software will require a minimum of 1,000 times more concurrency. Most parallel analysis and visualization algorithms today work by partitioning data and running mostly serial algorithms concurrently on each data partition. Although this approach lends itself well to the concurrency of current high-performance computing, it does not exhibit the pervasive parallelism required for exascale computing. The data partitions are too small and the overhead of the threads is too large to make effective use of all the cores in an extreme-scale machine. This paper introduces a new visualization framework designed to exhibit the pervasive parallelism necessary for extreme-scale machines. We demonstrate the use of this system on a GPU processor, which we feel is the best analog to an exascale node that we have available today.
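The pervasive-parallelism idea can be illustrated with a fine-grained, per-element operation that maps one logical invocation onto each datum, a style that translates naturally onto thousands of GPU threads. The NumPy sketch below is a conceptual stand-in; the "worklet" name and signature are not the framework's API.

```python
import numpy as np

# Sketch of the fine-grained, data-parallel style the framework targets:
# instead of one coarse data partition per core, an independent per-element
# operation runs for every datum. The vectorized NumPy call stands in for
# that per-element concurrency; names here are illustrative only.

def magnitude_worklet(vectors: np.ndarray) -> np.ndarray:
    """Per-element operation: compute the magnitude of each input vector."""
    return np.sqrt(np.sum(vectors * vectors, axis=-1))

# One logical invocation per element over a large field.
velocity = np.random.rand(1_000_000, 3)
speed = magnitude_worklet(velocity)
```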
IEEE Transactions on Visualization and Computer Graphics | 2013
Kenneth Moreland
The most common abstraction used by visualization libraries and applications today is what is known as the visualization pipeline. The visualization pipeline provides a mechanism to encapsulate algorithms and then couple them together in a variety of ways. The visualization pipeline has been in existence for over 20 years, and over this time many variations and improvements have been proposed. This paper provides a literature review of the most prevalent features of visualization pipelines and some of the most recent research directions.
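A minimal sketch of the pipeline abstraction under review: modules encapsulate algorithms, expose inputs and outputs, and execute on demand when a downstream consumer pulls. Class and method names below are illustrative, not any particular library's API.

```python
# Minimal sketch of the visualization-pipeline abstraction: algorithms are
# encapsulated as modules and coupled into a demand-driven chain. Names
# are illustrative.

class Filter:
    """A pipeline module that pulls data from upstream, transforms it,
    and caches the result until the pipeline changes."""

    def __init__(self, fn, upstream=None):
        self.fn, self.upstream, self._cache = fn, upstream, None

    def update(self):
        if self._cache is None:                       # demand-driven execution
            data = self.upstream.update() if self.upstream else None
            self._cache = self.fn(data)
        return self._cache

# Couple modules: source -> threshold filter -> sink (a stand-in "render").
source = Filter(lambda _: [0.2, 0.8, 0.5, 0.9, 0.1])
threshold = Filter(lambda d: [x for x in d if x > 0.5], upstream=source)
sink = Filter(lambda d: f"{len(d)} cells pass", upstream=threshold)
print(sink.update())   # pulls data through the whole pipeline
```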
Proceedings of the 2nd International Workshop on Petascale Data Analytics: Challenges and Opportunities | 2011
Kenneth Moreland; Ron A. Oldfield; Pat Marion; Sébastien Jourdain; Norbert Podhorszki; Venkatram Vishwanath; Nathan D. Fabian; Ciprian Docan; Manish Parashar; Mark Hereld; Michael E. Papka; Scott Klasky