L. Paul Chew
Cornell University
Publications
Featured research published by L. Paul Chew.
Algorithmica | 1989
L. Paul Chew
Given a set of n vertices in the plane together with a set of noncrossing, straight-line edges, the constrained Delaunay triangulation (CDT) is the triangulation of the vertices with the following properties: (1) the prespecified edges are included in the triangulation, and (2) it is as close as possible to the Delaunay triangulation. We show that the CDT can be built in optimal O(n log n) time using a divide-and-conquer technique. This matches the time required to build an arbitrary (unconstrained) Delaunay triangulation and the time required to build an arbitrary constrained (non-Delaunay) triangulation. CDTs, because of their relationship with Delaunay triangulations, have a number of properties that make them useful for the finite-element method. Applications also include motion planning in the presence of polygonal obstacles and constrained Euclidean minimum spanning trees, spanning trees subject to the restriction that some edges are prespecified.
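For readers unfamiliar with the underlying machinery, here is a minimal sketch (hypothetical Python, not code from the paper) of the in-circle predicate on which Delaunay-based constructions rest; in a CDT the empty-circumcircle test is applied only to vertices that are visible from the triangle across the constrained edges.

```python
import numpy as np

def in_circle(a, b, c, d):
    """True if point d lies strictly inside the circumcircle of triangle
    (a, b, c); points are (x, y) pairs and (a, b, c) is assumed to be
    oriented counterclockwise. This is the standard lifted determinant."""
    m = np.array([
        [a[0] - d[0], a[1] - d[1], (a[0] - d[0])**2 + (a[1] - d[1])**2],
        [b[0] - d[0], b[1] - d[1], (b[0] - d[0])**2 + (b[1] - d[1])**2],
        [c[0] - d[0], c[1] - d[1], (c[0] - d[0])**2 + (c[1] - d[1])**2],
    ])
    return np.linalg.det(m) > 0.0

# A Delaunay triangle has an empty circumcircle; a CDT triangle need only be
# empty of vertices visible across the prespecified (constraint) edges.
print(in_circle((0, 0), (1, 0), (0, 1), (0.9, 0.9)))   # True: inside
print(in_circle((0, 0), (1, 0), (0, 1), (2.0, 2.0)))   # False: outside
```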
symposium on computational geometry | 1993
L. Paul Chew
For several commonly-used solution techniques for partial differential equations, the first step is to divide the problem region into simply-shaped elements, creating a mesh. We present a technique for creating high-quality triangular meshes for regions on curved surfaces. This technique is an extension of previous methods we developed for regions in the plane. For both flat and curved surfaces, the resulting meshes are guaranteed to exhibit the following properties: (1) internal and external boundaries are respected, (2) element shapes are guaranteed—all elements are triangles with angles between 30 and 120 degrees (with the exception of badly shaped elements that may be required by the specified boundary), and (3) element density can be controlled, producing small elements in “interesting” areas and large elements elsewhere. An additional contribution of this paper is the development of a practical generalization of Delaunay triangulation to curved surfaces.
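As a rough illustration of the quality guarantee stated above, the sketch below (hypothetical Python, not the paper's algorithm) checks whether a planar triangle satisfies the 30-to-120-degree angle bound.

```python
import math

def triangle_angles(a, b, c):
    """Interior angles (degrees) of triangle abc, via the law of cosines."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    la, lb, lc = dist(b, c), dist(a, c), dist(a, b)  # sides opposite a, b, c
    A = math.degrees(math.acos((lb**2 + lc**2 - la**2) / (2 * lb * lc)))
    B = math.degrees(math.acos((la**2 + lc**2 - lb**2) / (2 * la * lc)))
    return A, B, 180.0 - A - B

def is_good_element(a, b, c, lo=30.0, hi=120.0):
    """Quality test in the spirit of the guarantee above: every angle of the
    element must lie between 30 and 120 degrees."""
    return all(lo <= ang <= hi for ang in triangle_angles(a, b, c))

print(is_good_element((0, 0), (1, 0), (0.5, 0.9)))    # nearly equilateral: True
print(is_good_element((0, 0), (1, 0), (0.5, 0.05)))   # nearly flat: False
```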
symposium on computational geometry | 1985
L. Paul Chew; Robert L. Drysdale III
We present an “expanding waves” view of Voronoi diagrams that allows such diagrams to be defined for very general metrics and for distance measures that do not qualify as metrics. If a pebble is dropped into a still pond, circular waves move out from the point of impact. If n pebbles are dropped simultaneously, the places where wave fronts meet define the Voronoi diagram on the n points of impact. The Voronoi diagram for any normed metric, including the Lp metrics, can be obtained by changing the shape of the wave front from a circle to the shape of the “circle” in that metric. (For example, the “circle” in the L1 metric is diamond shaped.) For any convex wave shape there is a corresponding convex distance function. If the shape is not symmetric about its center (a triangle, for example) then the resulting distance function is not a metric, although it can still be used to define a Voronoi diagram. Like Voronoi diagrams based on the Euclidean metric, the Voronoi diagrams based on other normed metrics can be used to solve various closest-point problems (all-nearest-neighbors, minimum spanning trees, etc.). Some of these problems also make sense under convex distance functions which are not metrics. In particular, the “largest empty circle” problem becomes the “largest empty convex shape” problem, and “motion planning for a disc” becomes “motion planning for a convex shape”. These problems can both be solved quickly given the Voronoi diagram. We present an asymptotically optimal algorithm for computing Voronoi diagrams based on convex distance functions.
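The wave-front picture can be illustrated with a brute-force sketch (hypothetical Python, not from the paper): assign every point of a grid to its nearest site under a chosen distance function. Swapping the Euclidean distance for the L1 distance changes the wave-front shape from a circle to a diamond; any convex distance function, even an asymmetric non-metric one, can be plugged in the same way.

```python
import numpy as np

def l2(d):   # Euclidean: circular wave front
    return np.hypot(d[..., 0], d[..., 1])

def l1(d):   # L1: diamond-shaped wave front
    return np.abs(d[..., 0]) + np.abs(d[..., 1])

def voronoi_labels(sites, dist, n=200, extent=1.0):
    """Index of the nearest site for every cell of an n-by-n grid."""
    xs = np.linspace(-extent, extent, n)
    X, Y = np.meshgrid(xs, xs)
    grid = np.stack([X, Y], axis=-1)                        # (n, n, 2)
    diffs = grid[None, :, :, :] - sites[:, None, None, :]   # (k, n, n, 2)
    return np.argmin(dist(diffs), axis=0)

sites = np.array([[-0.5, -0.3], [0.4, 0.1], [0.0, 0.6]])
labels_euclid = voronoi_labels(sites, l2)
labels_diamond = voronoi_labels(sites, l1)
print(labels_euclid.shape, np.unique(labels_diamond))
```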
Computational Geometry: Theory and Applications | 1997
L. Paul Chew; Michael T. Goodrich; Daniel P. Huttenlocher; Klara Kedem; Jon M. Kleinberg; Dina Kravets
Given two planar sets A and B, we examine the problem of determining the smallest ϵ such that there is a Euclidean motion (rotation and translation) of A that brings each member of A within distance ϵ of some member of B. We establish upper bounds on the combinatorial complexity of this subproblem in model-based computer vision, when the sets A and B contain points, line segments, or (filled-in) polygons. We also show how to use our methods to substantially improve on existing algorithms for finding the minimum Hausdorff distance under Euclidean motion.
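For context, the quantity being minimized is built from the directed Hausdorff distance; a brute-force sketch for a fixed pose is below (hypothetical Python with made-up point sets), whereas the paper's contribution concerns minimizing the symmetric distance over all rotations and translations of A.

```python
import numpy as np

def directed_hausdorff(A, B):
    """h(A, B) = max over a in A of min over b in B of ||a - b||.
    Brute-force O(|A|*|B|) version for finite point sets."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return d.min(axis=1).max()

def hausdorff(A, B):
    """Symmetric Hausdorff distance for a fixed placement of A."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.1], [1.0, -0.1], [3.0, 0.0]])
print(hausdorff(A, B))   # 2.0: the far point of B has no nearby point of A
```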
european symposium on algorithms | 1996
Srinivasa Rao Arikati; Danny Z. Chen; L. Paul Chew; Gautam Das; Michiel H. M. Smid; Christos D. Zaroliagis
We consider the problem of finding an obstacle-avoiding path between two points s and t in the plane, amidst a set of disjoint polygonal obstacles with a total of n vertices. The length of this path should be within a small constant factor c of the length of the shortest possible obstacle-avoiding s-t path measured in the Lp metric. Such an approximate shortest path is called a c-short path, or a short path with stretch factor c. The goal is to preprocess the obstacle-scattered plane by creating an efficient data structure that enables fast reporting of a c-short path (or its length). In this paper, we give a family of algorithms for the above problem that achieve an interesting trade-off between the stretch factor, the query time and the preprocessing bounds. Our main results are algorithms that achieve logarithmic length query time, after subquadratic time and space preprocessing.
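The stretch factor itself is straightforward to evaluate once a path has been reported; the following sketch (hypothetical Python, with a made-up query result) shows the check that a reported path is a c-short path.

```python
import math

def path_length(path, p=2):
    """Length of a polygonal path (a list of (x, y) vertices) in the Lp metric."""
    def lp(u, v):
        return (abs(u[0] - v[0])**p + abs(u[1] - v[1])**p) ** (1.0 / p)
    return sum(lp(path[i], path[i + 1]) for i in range(len(path) - 1))

def stretch_factor(path, shortest_len, p=2):
    """Ratio of a reported path's length to the true shortest obstacle-avoiding
    s-t length; the path is a c-short path exactly when this ratio is <= c."""
    return path_length(path, p) / shortest_len

# Hypothetical query result: a path detouring around an obstacle, compared with
# a known shortest obstacle-avoiding length of 5.0.
reported = [(0, 0), (2, 2), (4, 2), (5, 0)]
print(stretch_factor(reported, 5.0))   # about 1.41
```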
architectural support for programming languages and operating systems | 2008
Milind Kulkarni; Keshav Pingali; Ganesh Ramanarayanan; Bruce Walter; Kavita Bala; L. Paul Chew
Recent studies of irregular applications such as finite-element mesh generators and data-clustering codes have shown that these applications have a generalized data parallelism arising from the use of iterative algorithms that perform computations on elements of worklists. In some irregular applications, the computations on different elements are independent. In other applications, there may be complex patterns of dependences between these computations. The Galois system was designed to exploit this kind of irregular data parallelism on multicore processors. Its main features are (i) two kinds of set iterators for expressing worklist-based data parallelism, and (ii) a runtime system that performs optimistic parallelization of these iterators, detecting conflicts and rolling back computations as needed. Detection of conflicts and rolling back iterations requires information from class implementors. In this paper, we introduce mechanisms to improve the execution efficiency of Galois programs: data partitioning, data-centric work assignment, lock coarsening, and over-decomposition. These mechanisms can be used to exploit locality of reference, reduce mis-speculation, and lower synchronization overhead. We also argue that the design of the Galois system permits these mechanisms to be used with relatively little modification to the user code. Finally, we present experimental results that demonstrate the utility of these mechanisms.
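To make the optimistic-parallelization idea concrete, here is a deliberately simplified, single-threaded sketch of the worklist pattern in Python. It is not the Galois API, and the "locking" is only an abstract stand-in for the conflict detection and rollback the runtime performs.

```python
import random

def run_optimistic(worklist, neighborhood, apply_update, batch_size=4):
    """Process work items in speculative batches. Each item 'locks' the graph
    nodes in its neighborhood; if two items in a batch touch a common node,
    the later one is rolled back (here: simply re-queued) and retried."""
    committed, rounds = 0, 0
    while worklist:
        rounds += 1
        batch, worklist = worklist[:batch_size], worklist[batch_size:]
        locked = set()
        for item in batch:
            nodes = neighborhood(item)
            if locked & nodes:              # conflict detected: roll back
                worklist.append(item)       # retry in a later round
            else:
                locked |= nodes             # acquire abstract locks
                apply_update(item)          # commit the iteration
                committed += 1
    return committed, rounds

# Toy use: "updating" cells of a 1-D mesh, where each item touches itself and
# its two neighbors, so adjacent items conflict.
values = [0] * 20

def bump(i):
    values[i] += 1

work = list(range(20))
random.shuffle(work)
print(run_optimistic(work, lambda i: {i - 1, i, i + 1}, bump), sum(values))
```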
symposium on computational geometry | 1997
L. Paul Chew
The main contribution of this paper is a new mesh generation technique for producing 3D tetrahedral meshes. Like many existing techniques, this one is based on the Delaunay triangulation (DT). Unlike existing techniques, this is the first Delaunay-based method that is mathematically guaranteed to avoid slivers. A sliver is a tetrahedral mesh-element that is almost completely flat. For example, imagine the tetrahedron created as the (3D) convex hull of the four corners of a square; this tetrahedron has nicely shaped faces — all faces are 45-degree right triangles — but the tetrahedron has zero volume. Slivers in the mesh generally lead to poor numerical accuracy in a finite element analysis. The Delaunay triangulation (DT) has been widely used for mesh generation. In 2D, the DT maximizes the minimum angle for a given point set; thus, small angles are avoided. There is no analogous property involving angles in 3D. We make use of the Empty Circle Property for the DT of a set of point sites: the circumcircle of each triangle is empty of all other sites. In 3D, the analogous property holds: the circumsphere of each tetrahedron is empty of all other sites. The Empty Circle Property can be used as the definition of the DT. There is a vast literature on mesh generation with most of the material emanating from the various applications communities. We refer the reader to the excellent survey by Bern and Eppstein [BE92]. We consider here only work related to the topic of mesh generation with mathematical quality guarantees. Chew [Che89] showed how to use the DT to triangulate any 2D region with smooth boundaries and no sharp corners to attain a mesh of uniform density in which all angles are greater than 30 degrees. An optimality theorem for meshes of nonuniform density was developed by Bern, Eppstein and Gilbert [BEG94] using a quadtree-based approach. Ruppert [Ru93] later showed that a modification of Chew's algorithm could also attain the same strong results.
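The sliver example in the abstract is easy to verify numerically; the sketch below (hypothetical Python, not from the paper) computes the volume of the tetrahedron spanned by the corners of a unit square, and of a well-shaped tetrahedron for comparison.

```python
import numpy as np

def tet_volume(p0, p1, p2, p3):
    """Volume of a tetrahedron: |det[p1-p0, p2-p0, p3-p0]| / 6."""
    m = np.array([p1 - p0, p2 - p0, p3 - p0], dtype=float)
    return abs(np.linalg.det(m)) / 6.0

# The four corners of a unit square: nicely shaped faces, zero volume.
square_corners = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
print(tet_volume(*square_corners))    # 0.0: a perfect sliver

# A fat tetrahedron for comparison.
fat_tet = np.array([[0, 0, 0], [1, 0, 0], [0.5, 1, 0], [0.5, 0.3, 1]], dtype=float)
print(tet_volume(*fat_tet))           # about 0.167
```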
symposium on computational geometry | 2002
Démian Nave; Nikos Chrisochoides; L. Paul Chew
We describe a distributed memory parallel Delaunay refinement algorithm for polyhedral domains which can generate meshes containing tetrahedra with circumradius-to-shortest-edge ratio less than 2, as long as the angle separating any two incident segments and/or facets is between 90° and 270°. Input to our implementation is an element-wise partitioned, conforming Delaunay mesh of a restricted polyhedral domain which has been distributed to the processors of a parallel system. The submeshes of the distributed mesh are then independently refined by concurrently inserting new mesh vertices. Our algorithm allows a new mesh vertex to affect both the submesh tetrahedralizations and the submesh interfaces induced by the partitioning. This flexibility is crucial to ensure mesh quality, but it introduces unpredictable and variable latencies due to long delays in gathering remote data required for updating mesh data structures. In our experiments, more than 80% of this latency was masked with computation due to the fine-grained concurrency of our algorithm. Our experiments also show that the algorithm is efficient in practice, even for certain domains whose boundaries do not conform to the theoretical limits imposed by the algorithm. The algorithm we describe is the first step in the development of much more sophisticated guaranteed-quality parallel mesh generation algorithms.
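The quality measure cited above, the circumradius-to-shortest-edge ratio, can be computed directly; a small sketch follows (hypothetical Python, assuming a non-degenerate tetrahedron so the circumcenter system is solvable).

```python
import numpy as np
from itertools import combinations

def circumradius_to_shortest_edge(pts):
    """Radius-edge quality measure for a tetrahedron given as a 4x3 array:
    circumradius divided by shortest edge length. The refinement guarantee
    quoted above bounds this ratio by 2 for the output tetrahedra."""
    p0, p1, p2, p3 = pts
    # The circumcenter c satisfies 2 c . (pi - p0) = pi.pi - p0.p0 for i = 1..3.
    A = 2.0 * np.array([p1 - p0, p2 - p0, p3 - p0])
    b = np.array([p1 @ p1 - p0 @ p0, p2 @ p2 - p0 @ p0, p3 @ p3 - p0 @ p0])
    center = np.linalg.solve(A, b)
    R = np.linalg.norm(center - p0)
    shortest = min(np.linalg.norm(pts[i] - pts[j]) for i, j in combinations(range(4), 2))
    return R / shortest

# Regular tetrahedron with unit edges: ratio sqrt(3/8), about 0.61.
good_tet = np.array([[0, 0, 0], [1, 0, 0], [0.5, np.sqrt(3) / 2, 0],
                     [0.5, np.sqrt(3) / 6, np.sqrt(2.0 / 3.0)]])
print(circumradius_to_shortest_edge(good_tet))
```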
acm symposium on parallel algorithms and architectures | 2008
Milind Kulkarni; Patrick Carribault; Keshav Pingali; Ganesh Ramanarayanan; Bruce Walter; Kavita Bala; L. Paul Chew
Recent application studies have shown that many irregular applications have a generalized data parallelism that manifests itself as iterative computations over worklists of different kinds. In general, there are complex dependencies between iterations. These dependencies cannot be elucidated statically because they depend on the inputs to the program; thus, optimistic parallel execution is the only tractable approach to parallelizing these applications. We have built a system called Galois that supports this style of parallel execution. Its main features are (i) set iterators for expressing worklist-based data parallelism, and (ii) a runtime system that performs optimistic parallelization of these iterators, detecting conflicts and rolling back computations as needed. Our work builds on the Galois system, and it addresses the problem of scheduling iterations of set iterators on multiple cores. The policy used by the base Galois system is to assign an iteration to a core whenever it needs work to do, but we show in this paper that this policy is not optimal for many applications. We also argue that OpenMP-style DO-ALL loop scheduling directives such as chunked and guided self-scheduling are too simplistic for irregular programs. These difficulties led us to develop a general scheduling framework for irregular problems; OpenMP-style scheduling strategies are special cases of this general approach. We also provide hooks into our framework, allowing the programmer to leverage application knowledge to further tune a schedule for a particular application. To evaluate this framework, we implemented it as an extension of the Galois system. We then tested the system using five real-world, irregular, data-parallel applications. Our results show that (i) the optimal scheduling policy can be different for different applications and often leverages application-specific knowledge and (ii) implementing these schedules in the Galois system is relatively straightforward.
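As a toy illustration of the scheduling policies under discussion (hypothetical Python, not the Galois scheduling framework), the sketch below contrasts handing out one work item at a time with handing out fixed-size chunks, which trades load balance against locality and scheduling overhead.

```python
from collections import deque

def schedule(worklist, n_workers, chunk=1):
    """Round-robin assignment of chunks of a worklist to workers; returns the
    list of items each worker would process, in order."""
    queue = deque(worklist)
    assignment = [[] for _ in range(n_workers)]
    w = 0
    while queue:
        for _ in range(min(chunk, len(queue))):
            assignment[w].append(queue.popleft())
        w = (w + 1) % n_workers
    return assignment

work = list(range(10))
print(schedule(work, n_workers=3, chunk=1))   # fine-grained, well balanced
print(schedule(work, n_workers=3, chunk=4))   # chunked: better locality, coarser balance
```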
Proteins | 1999
Klara Kedem; L. Paul Chew; Ron Elber
The Unit-vector RMS (URMS) is a new technique to compare protein chains and to detect similarities of chain segments. It is limited to comparison of Cα chains. However, it has a number of unique features that include exceptionally weak dependence on the length of the chain and efficient detection of substructure similarities. Two molecular dynamics simulations of proteins in the neighborhood of their native states are used to test the performance of the URMS. The first simulation is of a solvated myoglobin and the second is of the protein MHC. In accord with previous studies, the secondary structure elements (helices or sheets) are found to be moving relatively rigidly among flexible loops. In addition to these tests, folding trajectories of C peptides are analyzed, revealing a folding nucleus of seven amino acids.
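A minimal sketch of the URMS idea in Python (an illustrative reconstruction from the abstract, assuming equal-length chains and no substructure search): form unit vectors between consecutive Cα positions, align the two sets of unit vectors with the optimal rotation, and take the RMS of the residuals.

```python
import numpy as np

def unit_bond_vectors(ca):
    """Unit vectors between consecutive C-alpha positions: (N, 3) -> (N-1, 3)."""
    d = np.diff(ca, axis=0)
    return d / np.linalg.norm(d, axis=1, keepdims=True)

def urms(ca_a, ca_b):
    """Unit-vector RMS between two equal-length C-alpha chains: find the
    rotation (via SVD, Kabsch-style) that best aligns the two sets of unit
    bond vectors, then take the RMS of the residual differences."""
    P, Q = unit_bond_vectors(ca_a), unit_bond_vectors(ca_b)
    W, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(W @ Vt))                 # keep a proper rotation
    R = W @ np.diag([1.0, 1.0, d]) @ Vt
    return np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1)))

# Identical chains give URMS 0; a slightly perturbed copy gives a small value.
rng = np.random.default_rng(0)
chain = np.cumsum(rng.normal(size=(20, 3)), axis=0)
print(urms(chain, chain))                              # ~0.0
print(urms(chain, chain + rng.normal(scale=0.05, size=(20, 3))))
```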