Mark C. Miller
Lawrence Livermore National Laboratory
Publications
Featured research published by Mark C. Miller.
IEEE Visualization | 1997
Mark A. Duchaineau; Murray Wolinsky; David E. Sigeti; Mark C. Miller; Charles Aldrich; Mark Mineev-Weinstein
Terrain visualization is a difficult problem for applications requiring accurate images of large datasets at high frame rates, such as flight simulation and ground-based aircraft testing using synthetic sensor simulation. On current graphics hardware, the problem is to maintain dynamic, view-dependent triangle meshes and texture maps that produce good images at the required frame rate. We present an algorithm for constructing triangle meshes that optimizes flexible view-dependent error metrics, produces guaranteed error bounds, achieves specified triangle counts directly, and uses frame-to-frame coherence to operate at high frame rates for thousands of triangles per frame. Our method, dubbed Real-time Optimally Adapting Meshes (ROAM), uses two priority queues to drive split and merge operations that maintain continuous triangulations built from pre-processed bintree triangles. We introduce two additional performance optimizations: incremental triangle stripping and priority-computation deferral lists. ROAM's execution time is proportional to the number of triangle changes per frame, which is typically a few percent of the output mesh size; hence ROAM's performance is insensitive to the resolution and extent of the input terrain. Dynamic terrain and simple vertex morphing are supported.
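To make the split/merge mechanics concrete, the following is a minimal C++ sketch of the greedy refinement loop at the heart of ROAM. It is an illustration under simplifying assumptions, not the paper's implementation: only the split queue is shown (the merge queue, the forced splits that keep the triangulation continuous, and frame-to-frame reuse are all omitted), and the error bound is assumed to halve per bintree level as a stand-in for the paper's nested bounds.

// Minimal sketch of ROAM-style greedy refinement (illustrative only).
#include <cstdio>
#include <queue>
#include <vector>

struct Tri {
    double error;  // hypothetical view-dependent error bound for this triangle
    int depth;     // bintree depth
};

// Max-heap ordering: split the triangle with the largest error first.
struct SplitOrder {
    bool operator()(const Tri& a, const Tri& b) const { return a.error < b.error; }
};

int main() {
    const int triangleBudget = 64;     // target triangle count for the frame
    const double errorTolerance = 0.5; // stop refining below this error

    std::priority_queue<Tri, std::vector<Tri>, SplitOrder> splitQ;
    splitQ.push({16.0, 0});  // root bintree triangle
    int triangles = 1;

    // Repeatedly split the worst triangle until the budget or tolerance is met.
    while (triangles < triangleBudget && splitQ.top().error > errorTolerance) {
        Tri worst = splitQ.top();
        splitQ.pop();
        // A bintree split replaces one triangle with two children; assume the
        // error bound roughly halves per level.
        splitQ.push({worst.error * 0.5, worst.depth + 1});
        splitQ.push({worst.error * 0.5, worst.depth + 1});
        ++triangles;  // net gain of one triangle per split
    }
    std::printf("mesh has %d triangles, max error %.3f\n",
                triangles, splitQ.top().error);
}

In the full algorithm, the second (merge) queue lets the previous frame's mesh be coarsened where the view has changed, which is what makes the per-frame cost proportional to the number of triangle changes rather than to the mesh size.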
IEEE Visualization | 2005
Hank Childs; Eric Brugger; Kathleen S. Bonnell; Jeremy S. Meredith; Mark C. Miller; Brad Whitlock; Nelson L. Max
VisIt is a richly featured visualization tool that is used to visualize some of the largest simulations ever run. The scale of these simulations requires that optimizations be incorporated into every operation VisIt performs. But the set of applicable optimizations depends on the types of operations being done. Complicating the issue, VisIt has a plugin capability that allows new, unforeseen components to be added, making it even harder to determine which optimizations can be applied. We introduce the concept of a contract to the standard data flow network design. This contract enables each component of the data flow network to modify the set of optimizations used. In addition, the contract allows new components to be accommodated gracefully within VisIt's data flow network system.
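The following is a minimal C++ sketch of how such a contract might propagate through a data flow network: before execution, the contract travels from the sink toward the source, and each filter declares which optimizations it can tolerate. All class and field names here are hypothetical, chosen to illustrate the concept rather than to mirror VisIt's actual interfaces.

// Sketch of contract propagation in a data flow network (illustrative names).
#include <cstdio>
#include <memory>
#include <vector>

struct Contract {
    bool canDoSpatialSubset = true;  // e.g., read only domains in a region of interest
    bool canUseGhostZones = true;    // e.g., ask the reader for ghost data
};

struct Filter {
    virtual ~Filter() = default;
    // Each component modifies the contract to reflect its own needs.
    virtual Contract modify(Contract c) const = 0;
};

struct SliceFilter : Filter {
    Contract modify(Contract c) const override { return c; }  // no restrictions
};

struct StreamlineFilter : Filter {
    Contract modify(Contract c) const override {
        c.canDoSpatialSubset = false;  // streamlines may wander anywhere
        return c;
    }
};

int main() {
    std::vector<std::unique_ptr<Filter>> pipeline;
    pipeline.push_back(std::make_unique<SliceFilter>());
    pipeline.push_back(std::make_unique<StreamlineFilter>());

    // Walk from sink to source, letting each filter restrict the contract;
    // the source then executes with only the surviving optimizations.
    Contract c;
    for (auto it = pipeline.rbegin(); it != pipeline.rend(); ++it)
        c = (*it)->modify(c);

    std::printf("spatial subsetting allowed: %s\n",
                c.canDoSpatialSubset ? "yes" : "no");
}

Because a plugin is just another Filter, an unforeseen component participates in optimization selection the same way the built-in components do.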
ACM Transactions on Mathematical Software | 2010
Carl Ollivier-Gooch; Lori Freitag Diachin; Mark S. Shephard; Timothy J. Tautges; Jason A. Kraftcheck; Vitus J. Leung; Xiaojuan Luo; Mark C. Miller
Much of the effort required to create a new simulation code goes into developing infrastructure for mesh data manipulation, adaptive refinement, design optimization, and so forth. This infrastructure is an obvious target for code reuse, except that implementations of these functionalities are typically tied to specific data structures. In this article, we describe a software component---an abstract data model and programming interface---designed to provide low-level mesh query and manipulation support for meshing and solution algorithms. The component’s data model provides a data abstraction, completely hiding all details of how mesh data is stored, while its interface defines how applications can interact with that data. Because the component has been carefully designed to be general purpose and efficient, it provides a practical platform for implementing high-level mesh operations independently of the underlying mesh data structures. After describing the data model and interface, we provide several usage examples, each of which has been used successfully with multiple implementations of the interface functionality. The overhead due to accessing mesh data through the interface rather than directly accessing the underlying mesh data is shown to be acceptably small.
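As a rough illustration of the data-abstraction idea, the C++ sketch below hides all storage details behind an abstract mesh interface; the types and method names are invented for this example and do not reproduce the actual ITAPS interface described in the article.

// Toy data-structure-neutral mesh interface (hypothetical names).
#include <cstdio>
#include <vector>

using EntityHandle = int;  // opaque handle: callers never see the storage

class MeshInterface {
public:
    virtual ~MeshInterface() = default;
    virtual int numEntities(int dim) const = 0;  // query entities by dimension
    virtual std::vector<EntityHandle> adjacencies(EntityHandle e,
                                                  int targetDim) const = 0;
};

// One possible implementation: flat element-to-vertex connectivity.
// A half-edge or octree mesh could sit behind the same interface with
// no change to application code.
class CompressedMesh : public MeshInterface {
    int nVerts_;
    std::vector<std::vector<EntityHandle>> elemToVerts_;
public:
    CompressedMesh(int nVerts, std::vector<std::vector<EntityHandle>> conn)
        : nVerts_(nVerts), elemToVerts_(std::move(conn)) {}
    int numEntities(int dim) const override {
        return dim == 0 ? nVerts_ : static_cast<int>(elemToVerts_.size());
    }
    std::vector<EntityHandle> adjacencies(EntityHandle e, int) const override {
        return elemToVerts_[e];  // element-to-vertex downward adjacency
    }
};

int main() {
    CompressedMesh mesh(4, {{0, 1, 2}, {1, 3, 2}});  // two triangles
    const MeshInterface& m = mesh;  // the application sees only the interface
    std::printf("%d vertices, %d elements; element 0 has %zu vertices\n",
                m.numEntities(0), m.numEntities(2), m.adjacencies(0, 0).size());
}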
International Conference on Computational Science | 2001
Mark C. Miller; James F. Reus; Robb P. Matzke; William Arrighi; Larry A. Schoof; Ray T. Hitt; Peter K. Espen
This paper describes the Sets and Fields (SAF) scientific data modeling system: a revolutionary approach to interoperation of high-performance scientific computing applications based upon rigorous, math-oriented data modeling principles. Previous technologies have required all applications to use the same data structures and/or meshes to represent scientific data, or have led to an ever-expanding set of incrementally different data structures and/or meshes. SAF addresses this problem by providing a small set of mathematical building blocks--sets, relations, and fields--out of which a wide variety of scientific data can be characterized. Applications literally model their data by assembling these building blocks. A short historical perspective, a conceptual model, and an overview of SAF, along with preliminary results from its use in a few ASCI codes, are discussed.
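The following is a minimal C++ sketch of how those three building blocks compose: a set names a collection of entities, a relation maps the members of one set into another, and a field binds data to a set. The types here are invented for illustration; SAF itself is a library with a considerably richer model.

// Sets, relations, and fields as a toy data model (illustrative types).
#include <cstdio>
#include <string>
#include <vector>

struct Set {
    std::string name;
    int size;  // number of entities in the set
};

struct SubsetRelation {  // maps members of sub to indices in sup
    const Set* sub;
    const Set* sup;
    std::vector<int> map;  // map[i] = index in sup of sub's i-th member
};

struct Field {  // one value per member of its base set
    const Set* base;
    std::vector<double> values;
};

// Restricting a field to a subset needs only the building blocks above.
Field restrictField(const Field& f, const SubsetRelation& r) {
    Field out{r.sub, {}};
    for (int idx : r.map) out.values.push_back(f.values[idx]);
    return out;
}

int main() {
    Set whole{"whole_mesh", 5}, boundary{"boundary", 2};
    SubsetRelation incl{&boundary, &whole, {0, 4}};  // boundary = cells 0 and 4
    Field pressure{&whole, {1.0, 2.0, 3.0, 4.0, 5.0}};
    Field bp = restrictField(pressure, incl);
    std::printf("boundary pressure: %.1f %.1f\n", bp.values[0], bp.values[1]);
}

The point of the exercise is that operations like subset restriction fall out of the model itself, with no reference to how any particular application stores its mesh.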
Journal of Physics: Conference Series | 2007
L Diachin; A Bauer; B Fix; Jason A. Kraftcheck; Kenneth E. Jansen; Xiaojuan Luo; Mark C. Miller; Carl Ollivier-Gooch; Mark S. Shephard; Timothy J. Tautges; Harold E. Trease
SciDAC applications have a demonstrated need for advanced software tools to manage the complexities associated with sophisticated geometry, mesh, and field manipulation tasks, particularly as computer architectures move toward the petascale. The Center for Interoperable Technologies for Advanced Petascale Simulations (ITAPS) will deliver interoperable and interchangeable mesh, geometry, and field manipulation services that are of direct use to SciDAC applications. The premise of our technology development goal is to provide such services as libraries that can be used with minimal intrusion into application codes. To develop these technologies, we focus on defining a common data model and data-structure neutral interfaces that unify a number of different services such as mesh generation and improvement, front tracking, adaptive mesh refinement, shape optimization, and solution transfer operations. We highlight the use of several ITAPS services in SciDAC applications.
Journal of Physics: Conference Series | 2009
Karen Dragon Devine; Lori Freitag Diachin; Jason A. Kraftcheck; Kenneth E. Jansen; Vitus J. Leung; Xiaojuan Luo; Mark C. Miller; Carl Ollivier-Gooch; Aleksandr Ovcharenko; Onkar Sahni; Mark S. Shephard; Timothy J. Tautges; Ting Xie; Min Zhou
SciDAC applications have a demonstrated need for advanced software tools to manage the complexities associated with sophisticated geometry, mesh, and field manipulation tasks, particularly as computer architectures move toward the petascale. In this paper, we describe a software component – an abstract data model and programming interface – designed to provide support for parallel unstructured mesh operations. We describe key issues that must be addressed to successfully provide high-performance, distributed-memory unstructured mesh services and highlight some recent research accomplishments in developing new load balancing and MPI-based communication libraries appropriate for leadership class computing. Finally, we give examples of the use of parallel adaptive mesh modification in two SciDAC applications.
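To give a flavor of the distributed-memory communication involved, here is a minimal MPI sketch of a part-boundary exchange: each rank sends the field value it owns on a shared entity to a neighbor and receives a ghost copy in return. It is deliberately simplistic (a ring of ranks, one value each); the communication libraries discussed in the paper handle general neighborhoods, message packing, and load balance.

// Minimal MPI part-boundary exchange (illustrative only).
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Pretend each rank shares one mesh vertex with the next rank in a ring.
    int right = (rank + 1) % size;
    int left = (rank - 1 + size) % size;

    double mine = 100.0 + rank;  // value owned on the shared vertex
    double ghost = 0.0;          // copy of the left neighbor's owned value

    // Send the owned value right, receive the ghost copy from the left.
    MPI_Sendrecv(&mine, 1, MPI_DOUBLE, right, 0,
                 &ghost, 1, MPI_DOUBLE, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    std::printf("rank %d: owned %.1f, ghost copy %.1f\n", rank, mine, ghost);
    MPI_Finalize();
    return 0;
}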
Journal of Physics: Conference Series | 2007
Kenneth I. Joy; Mark C. Miller; Hank Childs; E. Wes Bethel; John Clyne; George Ostrouchov; Sean Ahern
The challenges of visualization at the extreme scale involve issues of scale, complexity, temporal exploration, and uncertainty. The Visualization and Analytics Center for Enabling Technologies (VACET) focuses on leveraging scientific visualization and analytics software technology as an enabling technology for increased scientific discovery and insight. In this paper, we introduce new uses of visualization frameworks through the introduction of Equivalence Class Functions (ECFs). These functions provide a new class of derived quantities designed to greatly expand the ability of the end user to explore and visualize data. ECFs are defined over equivalence classes (i.e., groupings) of elements from an original mesh, and produce summary values for the classes as output. ECFs can be used in the visualization process to directly analyze data, or can be used to synthesize new derived quantities on the original mesh. The design of ECFs enables a parallel implementation that allows the use of these techniques on massive data sets that require parallel processing.
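The ECF pattern reduces to three steps: partition elements into equivalence classes by a key, reduce each class to a summary value, and optionally scatter the summaries back onto the original mesh as a new derived quantity. A minimal C++ sketch, with an invented per-element field (material id and mass), follows.

// Sketch of an Equivalence Class Function over a toy mesh (illustrative fields).
#include <cstdio>
#include <map>
#include <vector>

struct Element { int material; double mass; };

int main() {
    std::vector<Element> mesh = {
        {0, 1.0}, {1, 2.0}, {0, 3.0}, {1, 4.0}, {0, 0.5}
    };

    // Steps 1-2: group by material id and reduce each class to a summary.
    std::map<int, double> classMass;
    for (const auto& e : mesh) classMass[e.material] += e.mass;
    for (const auto& [mat, m] : classMass)
        std::printf("material %d: total mass %.2f\n", mat, m);

    // Step 3 (optional): synthesize a derived quantity on the original mesh,
    // here each element's share of its class's total mass.
    std::vector<double> massFraction;
    for (const auto& e : mesh)
        massFraction.push_back(e.mass / classMass[e.material]);
    std::printf("element 0 mass fraction within its class: %.2f\n",
                massFraction[0]);
}

A parallel version follows the same shape, with the per-class reduction becoming a distributed reduction keyed on the class identifier.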
2016 1st Joint International Workshop on Parallel Data Storage and data Intensive Scalable Computing Systems (PDSW-DISCS) | 2016
James S. Dickson; Steven A. Wright; Satheesh Maheswaran; Andy Herdman; Mark C. Miller; Stephen A. Jarvis
Large-scale simulation performance depends on a number of components; however, the task of investigation and optimization has long favored computational and communication elements over I/O. Manually extracting the pattern of I/O behavior from a parent application is a useful way of addressing performance issues on a per-application basis, but developing workflows with some degree of automation and flexibility provides a more powerful approach to tackling current and future I/O challenges. In this paper we describe a workload replication workflow that extracts the I/O pattern of an application and recreates its behavior with a flexible proxy application. We demonstrate how simple lightweight characterization can be translated to provide an effective representation of a physics application, and show how a proxy replication can be used as a tool for investigating I/O library paradigms.
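As a toy illustration of the record-and-replay idea, the sketch below captures an application's I/O pattern as a stream of (operation, byte-count) records and has a lightweight proxy reissue the same pattern against a scratch file. The record format and file name are invented; the paper's workflow derives its traces from lightweight characterization of the parent application and replays them through a configurable proxy application.

// Toy I/O trace replay (illustrative record format).
#include <cstdio>
#include <string>
#include <vector>

struct IORecord { std::string op; long bytes; };

int main() {
    // Stand-in for a trace extracted from the parent application.
    std::vector<IORecord> trace = {
        {"write", 4096}, {"write", 4096}, {"read", 1024}, {"write", 65536}
    };

    std::FILE* f = std::fopen("proxy_scratch.dat", "w+b");
    if (!f) return 1;
    std::vector<char> buf(65536, 0);

    // Reissue each operation with the same size, reproducing the I/O
    // behavior without the physics.
    for (const auto& r : trace) {
        if (r.op == "write") {
            std::fwrite(buf.data(), 1, r.bytes, f);
        } else if (r.op == "read") {
            std::rewind(f);              // a seek is required between modes
            std::fread(buf.data(), 1, r.bytes, f);
            std::fseek(f, 0, SEEK_END);  // and again before the next write
        }
    }
    std::fclose(f);
    std::printf("replayed %zu I/O records\n", trace.size());
}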
IEEE Visualization | 1997
Michael Cox; Roger Crawfis; Bernd Hamann; C. Hansen; Mark C. Miller
Massively parallel supercomputers are once again quickly outpacing our ability to organize, manage, and understand the prodigious amounts of data they generate. Graphics technology and algorithms have greatly aided in analyzing the modest datasets of years past, but rarely with enough interactivity to squelch the end users' exploratory questions. Will computer graphics and scientific visualization, or even computational science, proceed as status quo, or are new paradigm shifts needed? What is the architecture of tomorrow's high-end visualization systems? How much data can we even expect to pull off of these massively parallel machines? What are the new computer graphics technologies that can aid in terascale visualization? This panel, leveraging the panelists' past experience and their current knowledge of the field, will provide visions (or dilemmas) for what the next stage or stages of scientific visualization and data management will look like.
Physics of Fluids | 2005
William H. Cabot; Andrew W. Cook; Paul L. Miller; Daniel E. Laney; Mark C. Miller; Hank Childs