Berk Geveci
Kitware
Publications
Featured research published by Berk Geveci.
Visualization Handbook | 2005
James P. Ahrens; Berk Geveci
This chapter describes the design and features of a visualization tool called ParaView, a tool that allows scientists to visualize and analyze extremely large datasets. The tool provides a graphical user interface for the creation and dynamic execution of visualization tasks. ParaView transparently supports the visualization and rendering of large datasets by executing these tasks in parallel on shared- or distributed-memory machines. ParaView supports hardware-accelerated parallel rendering and achieves interactive rendering performance via level-of-detail techniques. The design balances and integrates a number of diverse requirements, including the ability to handle large data, ease of use, and extensibility by developers. The chapter describes the requirements that guided the design, identifies the importance of those requirements to scientific users, and discusses key design decisions and tradeoffs.
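The demand-driven pipeline model that ParaView builds on can be sketched in miniature: each stage pulls data from its upstream stage on update, so filters compose freely into larger visualization tasks. The classes and method names below are illustrative only, not the actual ParaView/VTK API.

```python
# Minimal sketch of a demand-driven visualization pipeline
# (illustrative names; not the real VTK/ParaView classes).

class Source:
    """Produces a dataset; here just a list of scalar samples."""
    def __init__(self, values):
        self.values = values

    def update(self):
        return list(self.values)

class ThresholdFilter:
    """Keeps samples at or above a cutoff, mimicking a threshold filter."""
    def __init__(self, upstream, cutoff):
        self.upstream = upstream
        self.cutoff = cutoff

    def update(self):
        # Pull from upstream, then transform: the demand-driven pattern.
        return [v for v in self.upstream.update() if v >= self.cutoff]

src = Source([0.1, 0.5, 0.9, 0.3, 0.7])
filt = ThresholdFilter(src, cutoff=0.5)
print(filt.update())  # [0.5, 0.9, 0.7]
```

Because each stage only knows its upstream neighbor, the same chain can be executed serially or, as in ParaView, replicated across ranks with each rank processing its own piece of the data.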
Physics of Plasmas | 2014
Homa Karimabadi; V. Roytershteyn; H.X. Vu; Yu. A. Omelchenko; J. D. Scudder; William Daughton; A. P. Dimmock; K. Nykyri; Minping Wan; David G. Sibeck; Mahidhar Tatineni; Amit Majumdar; Burlen Loring; Berk Geveci
Global hybrid (electron fluid, kinetic ions) and fully kinetic simulations of the magnetosphere have been used to show surprising interconnection between shocks, turbulence, and magnetic reconnection. In particular, collisionless shocks, with their reflected ions that can get upstream before retransmission, can generate previously unforeseen phenomena in the post-shock flows: (i) Formation of reconnecting current sheets and magnetic islands with sizes up to tens of ion inertial lengths. (ii) Generation of large-scale, low-frequency electromagnetic waves that are compressed and amplified as they cross the shock. These “wavefronts” maintain their integrity for tens of ion cyclotron times but eventually disrupt and dissipate their energy. (iii) Rippling of the shock front, which can in turn lead to the formation of fast collimated jets extending hundreds of ion inertial lengths downstream of the shock. The jets, which have high dynamical pressure, “stir” the downstream region, creating large-scale disturbances ...
IEEE Computer Graphics and Applications | 2001
James P. Ahrens; K. Brislawn; K. Martin; Berk Geveci; C.C. Law; Michael E. Papka
We present an architectural approach based on parallel data streaming to enable visualization on a parallel cluster. Our approach requires less memory than other approaches while achieving high code reuse. We implemented our architecture within the Visualization Toolkit (VTK). It includes specific additions to support the Message Passing Interface (MPI); memory-limit-based streaming of both implicit and explicit topologies; translation of streaming requests between topologies; and passing of data and pipeline control between shared-, distributed-, and mixed-memory configurations. The architecture directly supports both sort-first and sort-last parallel rendering.
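The core of memory-limit-based streaming is simple: split the dataset into pieces small enough to fit a byte budget, process one piece at a time, and accumulate only a reduced result, so peak memory stays bounded regardless of total data size. A minimal sketch, with hypothetical function names standing in for the VTK machinery:

```python
# Sketch of memory-limit-based streaming (illustrative; not the VTK API).
# Only one piece of the dataset is resident at a time.

def stream_reduce(num_items, bytes_per_item, memory_limit,
                  load_piece, reduce_piece):
    """Process [0, num_items) in pieces sized to fit memory_limit bytes,
    combining per-piece results into a running total."""
    piece_size = max(1, memory_limit // bytes_per_item)
    total = 0.0
    for start in range(0, num_items, piece_size):
        piece = load_piece(start, min(start + piece_size, num_items))
        total += reduce_piece(piece)  # keep the reduction, drop the piece
    return total

# Hypothetical "dataset": item i has value i. We sum 1000 items while
# never holding more than 512 bytes (64 items) worth of data at once.
load = lambda a, b: list(range(a, b))
result = stream_reduce(1000, bytes_per_item=8, memory_limit=512,
                       load_piece=load, reduce_piece=sum)
print(result)  # 499500.0
```

The same pattern underlies the paper's piece requests: a downstream consumer asks upstream for piece k of n, and the pipeline translates that request through each topology.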
eurographics workshop on parallel graphics and visualization | 2006
Andy Cedilnik; Berk Geveci; Kenneth Moreland; James P. Ahrens; Jean M. Favre
Scientists are using remote parallel computing resources to run simulations that model a range of scientific problems. Visualization tools are used to understand the massive datasets that result from these simulations. A number of problems must be overcome to create a visualization tool that works effectively in this scenario, including how to process and display massive datasets and how to communicate data and control information between geographically distributed computing and visualization resources. We believe a solution that incorporates a data-parallel data server, a data-parallel render server, and a client controller is key. Using this data-server/render-server/client model as a basis, this paper describes in detail a set of integrated solutions to remote and distributed visualization problems, including an efficient M-to-N parallel algorithm for transferring geometry data, an effective server-interface abstraction, and parallel rendering techniques for a range of display modalities, including tiled display walls and CAVEs.
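The M-to-N transfer problem has a simple shape: geometry produced on M data-server ranks, in pieces of uneven size, must be repartitioned across N render-server ranks in balanced shares. The round-robin redistribution below illustrates that shape only; it is not the paper's algorithm, which additionally minimizes communication between the two parallel programs.

```python
# Illustrative M-to-N redistribution: flatten M source pieces and deal
# items round-robin to N destination pieces (not the paper's algorithm).

def redistribute(pieces_from_m, n):
    """Balance items from M source ranks across N destination ranks."""
    out = [[] for _ in range(n)]
    i = 0
    for piece in pieces_from_m:
        for item in piece:
            out[i % n].append(item)
            i += 1
    return out

# 3 data-server ranks with uneven piece sizes -> 2 render ranks.
src = [[1, 2, 3], [4], [5, 6]]
dst = redistribute(src, 2)
print(dst)  # [[1, 3, 5], [2, 4, 6]]
```

In the real system each rank computes its slice of such a mapping locally and exchanges only the geometry it must send, rather than gathering everything in one place as this sketch does.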
ieee symposium on large data analysis and visualization | 2011
Kenneth Moreland; Utkarsh Ayachit; Berk Geveci; Kwan-Liu Ma
Experts agree that the exascale machine will comprise processors that contain many cores, which in turn will necessitate a much higher degree of concurrency; software will require a minimum of 1,000 times more concurrency. Most parallel analysis and visualization algorithms today work by partitioning data and running mostly serial algorithms concurrently on each data partition. Although this approach lends itself well to the concurrency of current high-performance computing, it does not exhibit the pervasive parallelism required for exascale computing: the data partitions are too small, and the overhead of the threads too large, to make effective use of all the cores in an extreme-scale machine. This paper introduces a new visualization framework designed to exhibit the pervasive parallelism necessary for extreme-scale machines. We demonstrate the use of this system on a GPU, which we feel is the best analog to an exascale node available today.
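The shift the paper argues for is from coarse, partition-level concurrency to fine-grained operations applied independently to every element, so the same algorithm maps onto thousands of cores or GPU threads without restructuring. A hedged sketch of that per-element style, using a Python thread pool as a stand-in for the framework's scheduler (names are illustrative, not the framework's API):

```python
# Per-element "worklet" style: each invocation touches one element and
# shares no state, so the map parallelizes at any granularity.
from concurrent.futures import ThreadPoolExecutor

def worklet(scalar):
    # Stand-in per-element operation; a real worklet might compute a
    # gradient, classify a cell, or interpolate an isosurface crossing.
    return scalar * 2.0

field = [0.5, 1.0, 1.5, 2.0]
with ThreadPoolExecutor() as pool:
    result = list(pool.map(worklet, field))
print(result)  # [1.0, 2.0, 3.0, 4.0]
```

Contrast this with the partition-level pattern the paper critiques, where one serial thread owns an entire data partition and the element loop inside it is never parallelized.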
IEEE Transactions on Visualization and Computer Graphics | 2007
John Biddiscombe; Berk Geveci; Ken Martin; Kenneth Moreland; David S. Thompson
Pipeline architectures provide a versatile and efficient mechanism for constructing visualizations, and they have been implemented in numerous libraries and applications over the past two decades. In addition to allowing developers and users to freely combine algorithms, visualization pipelines have proven to work well when streaming data and scale well on parallel distributed-memory computers. However, current pipeline visualization frameworks have a critical flaw: they are unable to manage time-varying data. As data flows through the pipeline, each algorithm has access to only a single snapshot in time of the data. This prevents the implementation of algorithms that do any temporal processing, such as particle tracing; plotting over time; or interpolation, fitting, or smoothing of time-series data. As data acquisition technology improves, as simulation time-integration techniques become more complex, and as simulations save less frequently and regularly, the ability to analyze the time behavior of data becomes more important. This paper describes a modification to the traditional pipeline architecture that allows it to accommodate temporal algorithms. Furthermore, the architecture allows temporal algorithms to be used in conjunction with algorithms expecting a single time snapshot, thus simplifying software design and allowing adoption into existing pipeline frameworks. Our architecture also continues to work well in parallel distributed-memory environments. We demonstrate our architecture by modifying the popular VTK framework and exposing the functionality to the ParaView application. We use this framework to apply time-dependent algorithms on large data with a parallel cluster computer and thereby exercise a functionality that previously did not exist.
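The key move in the temporal-pipeline design is that a downstream filter can widen the time request it forwards upstream, receiving several snapshots in one update instead of a single one. A minimal sketch of that request-widening pattern, with illustrative class names rather than VTK's actual request-key mechanism:

```python
# Sketch of a temporal pipeline: a filter asks upstream for the extra
# time steps it needs (illustrative names; not the VTK API).

class TimeSource:
    """Serves data for any requested time steps (here, f(t) = t**2)."""
    def update(self, time_steps):
        return {t: t ** 2 for t in time_steps}

class TemporalDifference:
    """Needs snapshots at t and t-1 to form a finite difference, so it
    widens the request it forwards upstream."""
    def __init__(self, upstream):
        self.upstream = upstream

    def update(self, time_steps):
        needed = sorted({s for t in time_steps for s in (t - 1, t)})
        data = self.upstream.update(needed)
        return {t: data[t] - data[t - 1] for t in time_steps}

diff = TemporalDifference(TimeSource())
print(diff.update([1, 2, 3]))  # {1: 1, 2: 3, 3: 5}
```

A snapshot-only filter in the same pipeline simply requests (and receives) one time step, which is how the design stays backward compatible with existing algorithms.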
IEEE Computer | 2013
Hank Childs; Berk Geveci; William J. Schroeder; Jeremy S. Meredith; Kenneth Moreland; Christopher M. Sewell; Torsten W. Kuhlen; E.W. Bethel
As the visualization research community reorients its software to address upcoming challenges, it must successfully deal with diverse processor architectures, distributed systems, various data sources, massive parallelism, multiple input and output devices, and interactivity.
IEEE Computer | 2013
Dean N. Williams; T. Bremer; Charles Doutriaux; John Patchett; Sean Williams; Galen M. Shipman; Ross Miller; Dave Pugmire; B. Smith; Chad A. Steed; E. W. Bethel; Hank Childs; H. Krishnan; P. Prabhat; M. Wehner; Cláudio T. Silva; Emanuele Santos; David Koop; Tommy Ellqvist; Jorge Poco; Berk Geveci; Aashish Chaudhary; Andrew C. Bauer; Alexander Pletzer; David A. Kindig; Gerald Potter; Thomas Maxwell
Collaboration across research, government, academic, and private sectors is integrating more than 70 scientific computing libraries and applications through a tailorable provenance framework, empowering scientists to exchange and examine data in novel ways.
ieee vgtc conference on visualization | 2016
Andrew C. Bauer; Hasan Abbasi; James P. Ahrens; Hank Childs; Berk Geveci; Scott Klasky; Kenneth Moreland; Patrick O'Leary; Venkatram Vishwanath; Brad Whitlock; E.W. Bethel
The considerable interest in the high performance computing (HPC) community in analyzing and visualizing data without first writing it to disk, i.e., in situ processing, is due to several factors. First is an I/O cost savings, where data is analyzed and visualized while being generated, without first storing it to a filesystem. Second is the potential for increased accuracy, where fine temporal sampling of transient analysis might expose complex behavior missed by coarse temporal sampling. Third is the ability to use all available resources, CPUs and accelerators, in the computation of analysis products. This STAR paper brings together researchers, developers, and practitioners using in situ methods in extreme-scale HPC with the goal of presenting existing methods, infrastructures, and a range of computational science and engineering applications using in situ analysis and visualization.
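The basic in situ pattern the survey covers is an analysis hook inside the simulation loop: analysis runs on in-memory data at whatever temporal frequency is affordable, and only the reduced products are kept. The solver and coupling below are illustrative stand-ins, not any particular in situ framework's interface:

```python
# Sketch of in situ coupling: analysis runs inside the solver loop on
# in-memory state, with no intermediate files (illustrative only).

def run_simulation(steps, analyze_every, analysis):
    state = 0.0
    outputs = []
    for step in range(steps):
        state += 0.1  # stand-in for one solver time step
        if step % analyze_every == 0:
            # In situ hook: fine temporal sampling is cheap because no
            # I/O happens; only the reduced result is retained.
            outputs.append(analysis(step, state))
    return outputs

results = run_simulation(steps=10, analyze_every=5,
                         analysis=lambda step, s: round(s, 1))
print(results)  # [0.1, 0.6]
```

Real in situ infrastructures differ mainly in where this hook lives: compiled into the solver, in a shared service on the same node, or on staging nodes reached over the interconnect.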
IEEE Computer Graphics and Applications | 2010
James P. Ahrens; Katrin Heitmann; Mark R. Petersen; Jonathan Woodring; Sean Williams; Patricia K. Fasel; Christine Ahrens; Chung-Hsing Hsu; Berk Geveci
This article presents a visualization-assisted process that verifies scientific-simulation codes. Code verification is necessary because scientists require accurate predictions to interpret data confidently. This verification process integrates iterative hypothesis verification with comparative, feature, and quantitative visualization. Following this process can help identify differences in cosmological and oceanographic simulations.