Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Tiankai Tu is active.

Publication


Featured research published by Tiankai Tu.


Conference on High Performance Computing (Supercomputing) | 2003

High Resolution Forward and Inverse Earthquake Modeling on Terascale Computers

Volkan Akcelik; Jacobo Bielak; George Biros; Ioannis Epanomeritakis; Antonio Fernandez; Omar Ghattas; Eui Joong Kim; Julio Lopez; David R. O'Hallaron; Tiankai Tu; John Urbanic

For earthquake simulations to play an important role in the reduction of seismic risk, they must be capable of high resolution and high fidelity. We have developed algorithms and tools for earthquake simulation based on multiresolution hexahedral meshes. We have used this capability to carry out 1 Hz simulations of the 1994 Northridge earthquake in the LA Basin using 100 million grid points. Our wave propagation solver sustains 1.21 teraflop/s for 4 hours on 3000 AlphaServer processors at 80% parallel efficiency. Because of uncertainties in characterizing earthquake source and basin material properties, a critical remaining challenge is to invert for source and material parameter fields for complex 3D basins from records of past earthquakes. Towards this end, we present results for material and source inversion of high-resolution models of basins undergoing antiplane motion using parallel scalable inversion algorithms that overcome many of the difficulties particular to inverse heterogeneous wave propagation problems.
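
The source and material inversion described above is a PDE-constrained optimization problem. As a rough sketch, and assuming the usual least-squares formulation (the exact misfit and regularization terms in the paper may differ), the inverse problem can be written as:

```latex
\min_{m,\,f}\; J(m,f) \;=\;
  \frac{1}{2} \sum_{r=1}^{N_r} \int_0^T
  \bigl\| u(\mathbf{x}_r, t;\, m, f) - u_r^{\mathrm{obs}}(t) \bigr\|^2 \, dt
  \;+\; \beta\, R(m)
```

subject to the elastic wave equation, where m is the material parameter field, f the seismic source, u the simulated wavefield, u_r^obs the seismogram recorded at receiver location x_r, R(m) a regularization functional (for example Tikhonov or total variation), and beta its weight.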


Conference on High Performance Computing (Supercomputing) | 2006

From mesh generation to scientific visualization: an end-to-end approach to parallel supercomputing

Tiankai Tu; Hongfeng Yu; Leonardo Ramírez-Guzmán; Jacobo Bielak; Omar Ghattas; Kwan-Liu Ma; David R. O'Hallaron

Parallel supercomputing has traditionally focused on the inner kernel of scientific simulations: the solver. The front and back ends of the simulation pipeline - problem description and interpretation of the output - have taken a back seat to the solver when it comes to attention paid to scalability and performance, and are often relegated to offline, sequential computation. As the largest simulations move beyond the realm of the terascale and into the petascale, this decomposition in tasks and platforms becomes increasingly untenable. We propose an end-to-end approach in which all simulation components - meshing, partitioning, solver, and visualization - are tightly coupled and execute in parallel with shared data structures and no intermediate I/O. We present our implementation of this new approach in the context of octree-based finite element simulation of earthquake ground motion. Performance evaluation on up to 2048 processors demonstrates the ability of the end-to-end approach to overcome the scalability bottlenecks of the traditional approach.
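
As a rough illustration of this tightly coupled design, the sketch below wires meshing, partitioning, solving, and visualization stages around a single shared in-memory mesh with no intermediate I/O. All names and interfaces here are placeholders invented for the example, not the paper's actual components.

```cpp
// Sketch of a tightly coupled simulation pipeline: meshing, partitioning,
// solving, and visualization share one in-memory data structure and run in
// the same process, with no intermediate files.  All names are illustrative
// placeholders, not the paper's actual interfaces.
#include <cstdio>
#include <vector>

struct Mesh {
    std::vector<double> nodes;     // node coordinates (flattened x,y,z)
    std::vector<int>    elements;  // element connectivity
    std::vector<double> field;     // per-node solution values
};

// Front end: build the discretization directly in memory.
Mesh generate_mesh() {
    Mesh m;
    m.nodes    = {0, 0, 0,  1, 0, 0,  0, 1, 0,  0, 0, 1};
    m.elements = {0, 1, 2, 3};
    m.field.assign(m.nodes.size() / 3, 0.0);
    return m;
}

// Assign elements to processors (trivial here; an SFC-based partitioner
// would occupy this slot).
void partition(Mesh&, int /*nprocs*/) {}

// One time step of a stand-in "solver".
void solve_step(Mesh& m, double dt) {
    for (double& u : m.field) u += dt;  // placeholder update
}

// Back end: consume the live solution field instead of writing it to disk.
void render_step(const Mesh& m, int step) {
    std::printf("step %d: u[0] = %f\n", step, m.field[0]);
}

int main() {
    Mesh mesh = generate_mesh();       // meshing
    partition(mesh, /*nprocs=*/1);     // partitioning
    for (int step = 0; step < 10; ++step) {
        solve_step(mesh, 0.01);        // solver
        render_step(mesh, step);       // in-situ visualization, no file I/O
    }
    return 0;
}
```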


Conference on High Performance Computing (Supercomputing) | 2005

Scalable Parallel Octree Meshing for TeraScale Applications

Tiankai Tu; David R. O'Hallaron; Omar Ghattas

We present a new methodology for generating and adapting octree meshes for terascale applications. Our approach combines existing methods, such as parallel octree decomposition and space-filling curves, with a set of new methods that address the special needs of parallel octree meshing. We have implemented these techniques in a parallel meshing tool called Octor. Performance evaluations on up to 2000 processors show that Octor has good isogranular scalability, fixed-size scalability, and absolute running time. Octor also provides a novel data access interface to parallel PDE solvers and parallel visualization pipelines, making it possible to develop tightly coupled end-to-end finite element simulations on terascale systems.
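
The combination of octrees and space-filling curves mentioned above is commonly realized by ordering leaf octants along a Morton (Z-order) curve and assigning contiguous chunks of that order to processors. The sketch below illustrates that key computation under this assumption; it is not Octor's actual code.

```cpp
// Sketch: order octree leaves along a Morton (Z-order) space-filling curve.
// Each leaf is identified by its integer anchor coordinates at the finest
// refinement level; interleaving the coordinate bits produces a key whose
// sorted order is the Z-order traversal used for partitioning.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Octant {
    uint32_t x, y, z;  // anchor coordinates on the finest-level grid
    int level;         // refinement level of this leaf
};

// Interleave the low 21 bits of x, y, z into a 63-bit Morton key.
uint64_t morton_key(uint32_t x, uint32_t y, uint32_t z) {
    auto spread = [](uint64_t v) {
        v &= 0x1fffff;  // keep 21 bits
        v = (v | (v << 32)) & 0x1f00000000ffffULL;
        v = (v | (v << 16)) & 0x1f0000ff0000ffULL;
        v = (v | (v << 8))  & 0x100f00f00f00f00fULL;
        v = (v | (v << 4))  & 0x10c30c30c30c30c3ULL;
        v = (v | (v << 2))  & 0x1249249249249249ULL;
        return v;
    };
    return spread(x) | (spread(y) << 1) | (spread(z) << 2);
}

int main() {
    std::vector<Octant> leaves = {
        {4, 0, 0, 3}, {0, 4, 0, 3}, {0, 0, 0, 2}, {4, 4, 4, 3}};

    // Sorting by Morton key gives the space-filling-curve order; contiguous
    // chunks of this order can then be assigned to processors.
    std::sort(leaves.begin(), leaves.end(),
              [](const Octant& a, const Octant& b) {
                  return morton_key(a.x, a.y, a.z) < morton_key(b.x, b.y, b.z);
              });

    for (const Octant& o : leaves)
        std::printf("leaf (%u,%u,%u) level %d key %llu\n", o.x, o.y, o.z,
                    o.level, (unsigned long long)morton_key(o.x, o.y, o.z));
    return 0;
}
```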


International Conference on Management of Data | 2006

Efficient query processing on unstructured tetrahedral meshes

Stratos Papadomanolakis; Anastassia Ailamaki; Julio Lopez; Tiankai Tu; David R. O'Hallaron; Gerd Heber

Modern scientific applications such as fluid dynamics and earthquake modeling heavily depend on massive volumes of data produced by computer simulations. Such applications require new data management capabilities in order to scale to terabyte-scale data volumes. The most common way to discretize the application domain is to decompose it into tetrahedra, forming an unstructured tetrahedral mesh. Modern simulations generate meshes of high resolution and precision, to be queried by a visualization or analysis tool. Tetrahedral meshes are extremely flexible and therefore vital for accurately modeling complex geometries, but they are also difficult to index. To reduce query execution time, applications either use only subsets of the data or rely on different (less flexible) structures, thereby trading accuracy for speed. This paper presents efficient indexing techniques for common spatial queries (point and range) on tetrahedral meshes. Because the prevailing multidimensional indexing techniques attempt to approximate the tetrahedra using simpler shapes (primarily rectangles), query performance deteriorates significantly as a function of the mesh's geometric complexity. We develop Directed Local Search (DLS), an efficient indexing algorithm based on mesh topology information that is practically insensitive to the geometric properties of meshes. We show how DLS can be easily and efficiently implemented within modern DBMSs without requiring new exotic index structures or complex preprocessing. Finally, we present a new data layout approach for tetrahedral mesh datasets that provides better performance for scientific applications compared to traditional space-filling curves. In our PostgreSQL implementation, DLS reduces the number of disk page accesses by 26% to 4x and improves overall query execution time by 25% to 4x.
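
Topology-based point location of the kind DLS performs can be pictured as a walk across face-adjacent tetrahedra toward the query point. The following is a generic visibility-walk sketch in that spirit, not the paper's DLS implementation.

```cpp
// Sketch: locate the tetrahedron containing a query point by walking across
// face-adjacent tetrahedra.  Starting from an arbitrary tetrahedron, replace
// one vertex at a time by the query point: a negative signed volume means the
// point lies on the far side of the face opposite that vertex, so the walk
// steps to the neighbor across that face.  Generic illustration only.
#include <array>
#include <cstdio>
#include <vector>

struct Pt { double x, y, z; };

// Six times the signed volume of tetrahedron (a, b, c, d).
double signed_vol(const Pt& a, const Pt& b, const Pt& c, const Pt& d) {
    double ax = b.x - a.x, ay = b.y - a.y, az = b.z - a.z;
    double bx = c.x - a.x, by = c.y - a.y, bz = c.z - a.z;
    double cx = d.x - a.x, cy = d.y - a.y, cz = d.z - a.z;
    return ax * (by * cz - bz * cy) - ay * (bx * cz - bz * cx) +
           az * (bx * cy - by * cx);
}

struct Mesh {
    std::vector<Pt> verts;
    std::vector<std::array<int, 4>> tets;       // positively oriented
    std::vector<std::array<int, 4>> neighbors;  // across face opposite vertex k;
                                                // -1 means boundary (hull) face
};

// Walk from tetrahedron `start` toward point p; returns the containing
// tetrahedron, or -1 if the walk leaves the mesh.
int locate(const Mesh& m, const Pt& p, int start) {
    int cur = start;
    for (int iter = 0; iter < 10000 && cur != -1; ++iter) {
        const auto& t = m.tets[cur];
        bool stepped = false;
        for (int k = 0; k < 4; ++k) {
            std::array<Pt, 4> q = {m.verts[t[0]], m.verts[t[1]],
                                   m.verts[t[2]], m.verts[t[3]]};
            q[k] = p;  // replace vertex k by the query point
            if (signed_vol(q[0], q[1], q[2], q[3]) < 0.0) {
                cur = m.neighbors[cur][k];  // p is beyond the face opposite k
                stepped = true;
                break;
            }
        }
        if (!stepped) return cur;  // p is inside (or on the boundary of) cur
    }
    return -1;
}

int main() {
    Mesh m;
    m.verts = {{0, 0, 0}, {1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
    m.tets = {{0, 1, 2, 3}};
    m.neighbors = {{-1, -1, -1, -1}};
    std::printf("containing tetrahedron: %d\n", locate(m, {0.1, 0.1, 0.1}, 0));
    return 0;
}
```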


Conference on High Performance Computing (Supercomputing) | 2006

Remote runtime steering of integrated terascale simulation and visualization

Tiankai Tu; Hongfeng Yu; Jacobo Bielak; Omar Ghattas; Julio Lopez; Kwan-Liu Ma; David R. O'Hallaron; Leonardo Ramírez-Guzmán; Nathan Stone; Ricardo Taborda-Rios; John Urbanic

We have developed a novel analytic capability for scientists and engineers to obtain insight from ongoing large-scale parallel unstructured mesh simulations running on thousands of processors. The breakthrough is made possible by a new approach that visualizes partial differential equation (PDE) solution data simultaneously while a parallel PDE solver executes. The solution field is pipelined directly to volume rendering, which is computed in parallel using the same processors that solve the PDE equations. Because our approach avoids the bottlenecks associated with transferring and storing large volumes of output data, it offers a promising approach to overcoming the challenges of visualization of petascale simulations. The submitted video demonstrates real-time on-the-fly monitoring, interpreting, and steering from a remote laptop computer of a 1024-processor simulation of the 1994 Northridge earthquake in Southern California.
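
The steering capability can be pictured as a solver time loop that, at every step, renders the live solution and polls a control channel for commands arriving from the remote client. The sketch below is only an illustrative stand-in; the stubbed poll_steering_command and its text command format are invented for the example, and the real system uses parallel volume rendering and a network transport.

```cpp
// Sketch of a runtime-steering loop: each simulation step renders the live
// solution and polls a (stubbed) control channel for commands from a remote
// client, e.g. a new camera orientation.  Illustrative only.
#include <cstdio>
#include <optional>
#include <string>

struct Camera { double azimuth = 0.0, elevation = 0.0; };

// Stand-in for a non-blocking read from the remote steering client.
std::optional<std::string> poll_steering_command(int step) {
    if (step == 5) return std::string("rotate 30 10");  // simulated input
    return std::nullopt;
}

void apply_command(const std::string& cmd, Camera& cam) {
    double az = 0, el = 0;
    if (std::sscanf(cmd.c_str(), "rotate %lf %lf", &az, &el) == 2) {
        cam.azimuth += az;
        cam.elevation += el;
    }
}

int main() {
    Camera cam;
    for (int step = 0; step < 10; ++step) {
        // ... advance the PDE solver one time step here ...
        if (auto cmd = poll_steering_command(step)) apply_command(*cmd, cam);
        // ... volume-render the current solution with `cam` here ...
        std::printf("step %d: camera az=%.1f el=%.1f\n", step, cam.azimuth,
                    cam.elevation);
    }
    return 0;
}
```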


Journal of Physics: Conference Series | 2009

ALPS: A framework for parallel adaptive PDE solution

Carsten Burstedde; Martin Burtscher; Omar Ghattas; Georg Stadler; Tiankai Tu; Lucas C. Wilcox

Adaptive mesh refinement and coarsening (AMR) is essential for the numerical solution of partial differential equations (PDEs) that exhibit behavior over a wide range of length and time scales. Because of the complex dynamic data structures and communication patterns and frequent data exchange and redistribution, scaling dynamic AMR to tens of thousands of processors has long been considered a challenge. We are developing ALPS, a library for dynamic mesh adaptation of PDEs that is designed to scale to hundreds of thousands of compute cores. Our approach uses parallel forest-of-octree-based hexahedral finite element meshes and dynamic load balancing based on space-filling curves. ALPS supports arbitrary-order accurate continuous and discontinuous finite element/spectral element discretizations on general geometries. We present scalability and performance results for two applications from geophysics: seismic wave propagation and mantle convection.
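
Load balancing along a space-filling curve reduces to cutting the SFC-ordered element sequence into contiguous pieces of roughly equal work. The sequential sketch below shows that splitting step under an assumed per-element weight array; it is not ALPS's actual code, which performs the equivalent operation in parallel on distributed element arrays.

```cpp
// Sketch: split a sequence of elements, already sorted along a space-filling
// curve, into `nparts` contiguous chunks of roughly equal total weight.
#include <cstdio>
#include <vector>

// Returns, for each part, the index one past its last element.
std::vector<size_t> sfc_partition(const std::vector<double>& weights,
                                  int nparts) {
    double total = 0.0;
    for (double w : weights) total += w;

    std::vector<size_t> ends;
    double running = 0.0;
    size_t i = 0;
    for (int p = 1; p <= nparts; ++p) {
        double target = total * p / nparts;  // ideal cumulative weight
        while (i < weights.size() && running + weights[i] <= target + 1e-12) {
            running += weights[i];
            ++i;
        }
        ends.push_back(i);
    }
    ends.back() = weights.size();  // the last part takes whatever remains
    return ends;
}

int main() {
    // Per-element work estimates in SFC order (finer elements may cost more).
    std::vector<double> w = {1, 1, 2, 1, 4, 1, 1, 2, 1, 1, 3, 2};
    std::vector<size_t> ends = sfc_partition(w, 3);
    size_t begin = 0;
    for (size_t p = 0; p < ends.size(); ++p) {
        std::printf("part %zu: elements [%zu, %zu)\n", p, begin, ends[p]);
        begin = ends[p];
    }
    return 0;
}
```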


Conference on High Performance Computing (Supercomputing) | 2004

A Computational Database System for Generating Unstructured Hexahedral Meshes with Billions of Elements

Tiankai Tu; David R. O'Hallaron

For a large class of physical simulations with relatively simple geometries, unstructured octree-based hexahedral meshes provide a good compromise between adaptivity and simplicity. However, generating unstructured hexahedral meshes with over 1 billion elements remains a challenging task. We propose a database approach to solve this problem. Instead of merely storing generated meshes into conventional databases, we have developed a new kind of software system called Computational Database System (CDS) to generate meshes directly on databases. Our basic idea is to extend existing database techniques to organize and index mesh data, and use database-aware algorithms to manipulate database structures and generate meshes. This paper presents the design, implementation, and evaluation of a prototype CDS named Weaver, which has been used successfully by the CMU Quake project to generate queryable high-resolution finite element meshes for earthquake simulations with up to 1.22B elements and 1.37B nodes.
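
One common way to make a mesh database-native, and the assumption behind the sketch below, is to key leaf octants by their locational (Morton) code so that an ordinary ordered index can answer point queries: the containing leaf is the entry with the largest key not exceeding the query point's finest-level code. Here std::map stands in for the DBMS index; this is not Weaver's actual schema.

```cpp
// Sketch: a point query against a "linear octree stored in a database".
// Leaf octants are kept in an ordered index keyed by the Morton code of
// their anchor at the finest level (std::map stands in for a B-tree index
// inside the DBMS).  Illustrative only.
#include <cstdint>
#include <cstdio>
#include <map>

constexpr int kMaxLevel = 10;  // finest refinement level (domain is 2^10 wide)

// Interleave the bits of (x, y, z), given on the finest-level grid.
uint64_t morton(uint32_t x, uint32_t y, uint32_t z) {
    uint64_t key = 0;
    for (int b = kMaxLevel - 1; b >= 0; --b)
        key = (key << 3) | (((x >> b) & 1u) << 2) | (((y >> b) & 1u) << 1) |
              ((z >> b) & 1u);
    return key;
}

struct Leaf { int level; /* payload: material properties, etc. */ };

int main() {
    // Ordered index of leaves, keyed by the Morton code of each leaf's anchor.
    std::map<uint64_t, Leaf> index;
    index[morton(0, 0, 0)]   = {1};  // level-1 leaf covering [0,512)^3
    index[morton(512, 0, 0)] = {2};  // level-2 leaf anchored at (512,0,0)

    uint64_t q = morton(100, 200, 300);  // finest-level code of the query point

    auto it = index.upper_bound(q);
    if (it == index.begin()) { std::printf("point not covered\n"); return 0; }
    --it;  // candidate: the leaf with the largest anchor code <= q

    // A leaf at `level` spans 8^(kMaxLevel - level) consecutive finest-level
    // codes starting at its anchor code; verify the query falls inside.
    uint64_t span = 1ULL << (3 * (kMaxLevel - it->second.level));
    if (q < it->first + span)
        std::printf("containing leaf found at level %d\n", it->second.level);
    else
        std::printf("point falls in a gap of an incomplete octree\n");
    return 0;
}
```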


Conference on High Performance Computing (Supercomputing) | 2004

Big Wins with Small Application-Aware Caches

Julio Lopez; David R. O'Hallaron; Tiankai Tu

Large datasets, on the order of GB and TB, are increasingly common as abundant computational resources allow practitioners to collect, produce and store data at higher rates. As dataset sizes grow, it becomes more challenging to interactively manipulate and analyze these datasets due to the large amounts of data that need to be moved and processed. Application-independent caches, such as operating system page caches and database buffer caches, are present throughout the memory hierarchy to reduce data access times and alleviate transfer overheads. We claim that an application-aware cache with relatively modest memory requirements can effectively exploit dataset structure and application information to speed access to large datasets. We demonstrate this idea in the context of a system named the tree cache, which reduces query latency for large octree datasets by an order of magnitude.
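
An application-aware cache of this kind can be pictured as a small LRU map keyed by the octant locational codes the application actually queries, going to the backing store only on a miss. The sketch below is a generic illustration under that assumption, not the tree cache implementation.

```cpp
// Sketch: a small application-aware cache for octree datasets.  Entries are
// keyed by an octant's locational code (the unit the application queries)
// and evicted in least-recently-used order.  Generic illustration only.
#include <cstdint>
#include <cstdio>
#include <list>
#include <string>
#include <unordered_map>

class OctantCache {
public:
    explicit OctantCache(size_t capacity) : capacity_(capacity) {}

    // Return the payload for octant `code`, hitting the backing store only
    // on a miss.
    const std::string& get(uint64_t code) {
        auto it = map_.find(code);
        if (it != map_.end()) {
            // Hit: move the entry to the front of the recency list.
            lru_.splice(lru_.begin(), lru_, it->second);
            return it->second->second;
        }
        // Miss: fetch from the (slow) backing store, insert, maybe evict.
        lru_.emplace_front(code, fetch_from_store(code));
        map_[code] = lru_.begin();
        if (map_.size() > capacity_) {
            map_.erase(lru_.back().first);
            lru_.pop_back();
        }
        return lru_.front().second;
    }

private:
    // Stand-in for reading an octant record from disk or a remote server.
    static std::string fetch_from_store(uint64_t code) {
        std::printf("  (miss: fetching octant %llu)\n",
                    (unsigned long long)code);
        return "payload for octant " + std::to_string(code);
    }

    size_t capacity_;
    std::list<std::pair<uint64_t, std::string>> lru_;  // front = most recent
    std::unordered_map<uint64_t,
                       std::list<std::pair<uint64_t, std::string>>::iterator>
        map_;
};

int main() {
    OctantCache cache(2);
    for (uint64_t code : {7ULL, 7ULL, 9ULL, 7ULL, 11ULL, 9ULL})
        std::printf("%s\n", cache.get(code).c_str());
    return 0;
}
```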


IEEE International Conference on High Performance Computing, Data, and Analytics | 2008

Scalable adaptive mantle convection simulation on petascale supercomputers

Carsten Burstedde; Omar Ghattas; Michael Gurnis; Georg Stadler; Eh Tan; Tiankai Tu; Lucas C. Wilcox; Shijie Zhong


Computer Methods in Applied Mechanics and Engineering | 2009

Parallel scalable adjoint-based adaptive solution of variable-viscosity Stokes flow problems

Carsten Burstedde; Omar Ghattas; Georg Stadler; Tiankai Tu; Lucas C. Wilcox

Collaboration


Dive into Tiankai Tu's collaborations.

Top Co-Authors

Omar Ghattas (University of Texas at Austin)
Julio Lopez (Carnegie Mellon University)
Carsten Burstedde (University of Texas at Austin)
Georg Stadler (Courant Institute of Mathematical Sciences)
Jacobo Bielak (Carnegie Mellon University)
Lucas C. Wilcox (Naval Postgraduate School)
Hongfeng Yu (University of Nebraska–Lincoln)
Kwan-Liu Ma (University of Texas at Austin)