
Publication


Featured research published by Mohamed S. Ebeida.


International Conference on Computer Graphics and Interactive Techniques | 2011

Efficient maximal Poisson-disk sampling

Mohamed S. Ebeida; Andrew A. Davidson; Anjul Patney; Patrick M. Knupp; Scott A. Mitchell; John D. Owens

We solve the problem of generating a uniform Poisson-disk sampling that is both maximal and unbiased over bounded non-convex domains. To our knowledge this is the first provably correct algorithm with time and space dependent only on the number of points produced. Our method has two phases, both based on classical dart-throwing. The first phase uses a background grid of square cells to rapidly create an unbiased, near-maximal covering of the domain. The second phase completes the maximal covering by calculating the connected components of the remaining uncovered voids, and by using their geometry to efficiently place unbiased samples that cover them. The second phase converges quickly, overcoming a common difficulty in dart-throwing methods. The deterministic memory is O(n) and the expected running time is O(n log n), where n is the output size, the number of points in the final sample. Our serial implementation verifies that the log n dependence is minor, and nearly O(n) performance for both time and memory is achieved in practice. We also present a parallel implementation on GPUs to demonstrate the parallel-friendly nature of our method, which achieves 2.4x the performance of our serial version.
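
For readers who want to experiment, the sketch below shows the classical grid-based dart throwing that the first phase builds on: a background grid of square cells with side r/sqrt(2) (so each cell holds at most one sample) keeps the conflict check local. This is a plain Python illustration, not the paper's implementation; in particular it omits the second, void-covering phase, so it terminates on consecutive misses and is only near-maximal.

```python
import math
import random

def grid_dart_throwing(width, height, r, max_misses=10000):
    """Near-maximal Poisson-disk sampling by classical grid-based dart throwing.

    A background grid with cell size r/sqrt(2) holds at most one sample per
    cell, so a conflict check only has to look at nearby cells.
    """
    cell = r / math.sqrt(2.0)
    grid = {}          # (i, j) -> (x, y) of the sample occupying that cell
    samples = []

    def conflicts(x, y):
        i, j = int(x / cell), int(y / cell)
        for di in range(-2, 3):          # a disk of radius r reaches at most 2 cells away
            for dj in range(-2, 3):
                p = grid.get((i + di, j + dj))
                if p is not None and (p[0] - x) ** 2 + (p[1] - y) ** 2 < r * r:
                    return True
        return False

    misses = 0
    while misses < max_misses:           # stop after many consecutive rejected darts
        x, y = random.uniform(0, width), random.uniform(0, height)
        if conflicts(x, y):
            misses += 1
            continue
        misses = 0
        grid[(int(x / cell), int(y / cell))] = (x, y)
        samples.append((x, y))
    return samples
```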


Computer Graphics Forum | 2012

A Simple Algorithm for Maximal Poisson-Disk Sampling in High Dimensions

Mohamed S. Ebeida; Scott A. Mitchell; Anjul Patney; Andrew A. Davidson; John D. Owens

We provide a simple algorithm and data structures for d-dimensional unbiased maximal Poisson-disk sampling. We use an order of magnitude less memory and time than the alternatives. Our results become more favorable as the dimension increases. This allows us to produce bigger samplings. Domains may be non-convex with holes. The generated point cloud is maximal up to round-off error. The serial algorithm is provably bias-free. For an output sampling of size n in fixed dimension d, we use a linear memory budget and empirical Θ(n) runtime. No known methods scale well with dimension, due to the "curse of dimensionality." The serial algorithm is practical in dimensions up to 5, and has been demonstrated in 6d. We have efficient GPU implementations in 2d and 3d. The algorithm proceeds through a finite sequence of uniform grids. The grids guide the dart throwing and track the remaining disk-free area. The top-level grid provides an efficient way to test if a candidate dart is disk-free. Our uniform grids are like quadtrees, except we delay splits and refine all leaves at once. Since the quadtree is flat it can be represented using very little memory: we just need the indices of the active leaves and a global level. Also it is very simple to sample from leaves with uniform probability.
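
A rough Python sketch of the flat-grid idea follows: all active cells sit at one global level, darts are thrown into the active cells, and then every surviving cell is split into 2^d children at once, discarding children already covered by an accepted disk. The brute-force conflict and coverage tests here stand in for the paper's efficient top-level-grid test; the function name and parameters are illustrative only.

```python
import itertools
import random

def flat_quadtree_mps(d, r, rounds=6, darts_per_cell=4):
    """Sketch of maximal Poisson-disk sampling guided by a flat, uniform grid
    over the unit cube [0, 1]^d.  All active cells share one global level."""
    samples = []
    cell = 1.0
    active = [(0,) * d]                # integer cell indices at the current level

    def conflict(p):
        return any(sum((a - b) ** 2 for a, b in zip(p, q)) < r * r for q in samples)

    def covered(idx, size):
        # A cell is fully covered if its center lies within r - (half diagonal)
        # of some accepted sample.
        center = [(i + 0.5) * size for i in idx]
        half_diag = (d ** 0.5) * size / 2.0
        return any(sum((a - b) ** 2 for a, b in zip(center, q)) ** 0.5 < r - half_diag
                   for q in samples)

    for _ in range(rounds):
        for idx in active:
            for _ in range(darts_per_cell):
                p = tuple((i + random.random()) * cell for i in idx)
                if not conflict(p):
                    samples.append(p)
        # Refine: split every active cell into 2^d children, drop covered ones.
        cell /= 2.0
        children = []
        for idx in active:
            for offs in itertools.product((0, 1), repeat=d):
                child = tuple(2 * i + o for i, o in zip(idx, offs))
                if not covered(child, cell):
                    children.append(child)
        active = children
        if not active:
            break                      # no disk-free area remains: sampling is maximal
    return samples
```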


IMR | 2011

Uniform Random Voronoi Meshes

Mohamed S. Ebeida; Scott A. Mitchell

We generate Voronoi meshes over three dimensional domains with prescribed boundaries. Voronoi cells are clipped at one-sided domain boundaries. The seeds of Voronoi cells are generated by maximal Poisson-disk sampling. In contrast to centroidal Voronoi tessellations, our seed locations are unbiased. The exception is some bias near concave features of the boundary to ensure well-shaped cells. The method is extensible to generating Voronoi cells that agree on both sides of two-sided internal boundaries.
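
As a small illustration of the seeds-to-cells step (assuming SciPy is available), the snippet below builds an unclipped Voronoi diagram from a set of seeds; the paper's contribution, clipping cells at one-sided and two-sided domain boundaries, is not shown.

```python
import numpy as np
from scipy.spatial import Voronoi  # requires SciPy

# Seeds would come from a maximal Poisson-disk sampling of the domain;
# uniform random points stand in for them here.
rng = np.random.default_rng(0)
seeds = rng.random((200, 3))        # 200 seeds in the unit cube

vor = Voronoi(seeds)
# vor.regions / vor.vertices describe the unclipped cells; clipping them at a
# prescribed boundary is the hard part and is not shown here.
print(len(vor.vertices), "Voronoi vertices")
```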


Computer-Aided Design | 2011

Efficient and good Delaunay meshes from random points

Mohamed S. Ebeida; Scott A. Mitchell; Andrew A. Davidson; Anjul Patney; Patrick M. Knupp; John D. Owens

We present a Conforming Delaunay Triangulation (CDT) algorithm based on maximal Poisson-disk sampling. Points are unbiased, meaning the probability of introducing a vertex in a disk-free subregion is proportional to its area, except in a neighborhood of the domain boundary. In contrast, Delaunay refinement CDT algorithms place points dependent on the geometry of empty circles in intermediate triangulations, usually near the circle centers. Unconstrained angles in our mesh are between 30° and 120°, matching some biased CDT methods. Points are placed on the boundary using a one-dimensional maximal Poisson-disk sampling. Any triangulation method producing angles bounded away from 0° and 180° must have some bias near the domain boundary to avoid placing vertices infinitesimally close to the boundary. Random meshes are preferred for some simulations, such as fracture simulations where cracks must follow mesh edges, because deterministic meshes may introduce non-physical phenomena. An ensemble of random meshes aids simulation validation. Poisson-disk triangulations also avoid some graphics rendering artifacts, and have the blue-noise property. We mesh two-dimensional domains that may be non-convex with holes, required points, and multiple regions in contact. Our algorithm is also fast and uses little memory. We have recently developed a method for generating a maximal Poisson distribution of n output points, where n = Θ(Area/r²) and r is the sampling radius. It takes O(n) memory and O(n log n) expected time; in practice the time is nearly linear. This, or a similar subroutine, generates our random points. Except for this subroutine, we provably use O(n) time and space. The subroutine gives the location of points in a square background mesh. Given this, the neighborhood of each point can be meshed independently in constant time. These features facilitate parallel and GPU implementations. Our implementation works well in practice as illustrated by several examples and comparison to Triangle. Highlights: a conforming Delaunay triangulation algorithm based on maximal Poisson-disk sampling; angles between 30° and 120°; two-dimensional non-convex domains with holes, planar straight-line graphs; O(n) space and E(n log n) time, efficient in practice; background squares ensure all computations are local.
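
The angle bound is easy to probe empirically. The sketch below (assuming NumPy and SciPy) computes the minimum angle of every triangle in a Delaunay triangulation; with Poisson-disk points and the paper's boundary handling these angles stay between 30° and 120°, whereas the uniform random points used here as a stand-in can produce arbitrarily small angles.

```python
import numpy as np
from scipy.spatial import Delaunay  # requires SciPy

def min_angles_deg(points):
    """Minimum interior angle of every triangle in the Delaunay triangulation."""
    tri = Delaunay(points)
    p = points[tri.simplices]                      # (m, 3, 2) triangle vertices
    a = np.linalg.norm(p[:, 1] - p[:, 2], axis=1)  # side lengths opposite each vertex
    b = np.linalg.norm(p[:, 2] - p[:, 0], axis=1)
    c = np.linalg.norm(p[:, 0] - p[:, 1], axis=1)
    # Law of cosines for two angles; the third follows from the angle sum.
    A = np.degrees(np.arccos(np.clip((b**2 + c**2 - a**2) / (2 * b * c), -1, 1)))
    B = np.degrees(np.arccos(np.clip((a**2 + c**2 - b**2) / (2 * a * c), -1, 1)))
    C = 180.0 - A - B
    return np.minimum(np.minimum(A, B), C)

# Uniform random points, used here only as a stand-in for a Poisson-disk set.
rng = np.random.default_rng(1)
angles = min_angles_deg(rng.random((500, 2)))
print("worst minimum angle:", angles.min())
```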


Archive | 2014

Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

Brian M. Adams; Mohamed S. Ebeida; Michael S. Eldred; John Davis Jakeman; Laura Painton Swiler; John Adam Stephens; Dena M. Vigil; Timothy Michael Wildey; William J. Bohnhoff; John P. Eddy; Kenneth T. Hu; Keith R. Dalbey; Lara E Bauman; Patricia Diane Hough

The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.


High Performance Graphics | 2012

High-quality parallel depth-of-field using line samples

Stanley Tzeng; Anjul Patney; Andrew A. Davidson; Mohamed S. Ebeida; Scott A. Mitchell; John D. Owens

We present a parallel method for rendering high-quality depth-of-field effects using continuous-domain line samples, and demonstrate its high performance on commodity GPUs. Our method runs at interactive rates and has very low noise. Our exploration of the problem carefully considers implementation alternatives, and transforms an originally unbounded storage requirement to a small fixed requirement using heuristics to maintain quality. We also propose a novel blur-dependent level-of-detail scheme that helps accelerate rendering without undesirable artifacts. Our method consistently runs 4 to 5x faster than an equivalent point sampler with better image quality. Our method draws parallels to related work in rendering multi-fragment effects.


Computer-Aided Design | 2014

Improving spatial coverage while preserving the blue noise of point sets

Mohamed S. Ebeida; Muhammad A. Awad; Xiaoyin Ge; Ahmed H. Mahmoud; Scott A. Mitchell; Patrick M. Knupp; Li-Yi Wei

We explore the notion of a Well-spaced Blue-noise Distribution (WBD) of points, which combines two desirable properties. First, the point distribution is random, as measured by its spectrum having blue noise. Second, it is well-spaced in the sense that the minimum separation distance between samples is large compared to the maximum coverage distance between a domain point and a sample, i.e. its Voronoi cell aspect ratios 2β_i are small. It is well known that maximizing one of these properties destroys the other: uniform random points have no aspect ratio bound, and the vertices of an equilateral triangular tiling have no randomness. However, we show that there is a lot of room in the middle to get good values for both. Maximal Poisson-disk sampling provides β = 1 and blue noise. We show that a standard optimization technique can improve the well-spacedness while preserving randomness. Given a random point set, our Opt-β_i algorithm iterates over the points, and for each point locally optimizes its Voronoi cell aspect ratio 2β_i. It can improve β_i to a large fraction of the theoretical bound given by a structured tiling: improving from 1.0 to around 0.8, about half-way to 0.58, while preserving most of the randomness of the original set. In terms of both β and randomness, the output of Opt-β_i compares favorably to alternative point improvement techniques, such as centroidal Voronoi tessellation with a constant density function, which do not target β directly. We demonstrate the usefulness of our output through meshing and filtering applications. An open problem is constructing from scratch a WBD distribution with a guarantee of β < 1.
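
As a rough way to measure what Opt-β_i improves, the sketch below (SciPy assumed) computes, for each sample whose Voronoi cell is bounded, the ratio of its coverage radius (distance to the farthest cell vertex) to its nearest-neighbor separation. This ratio is only a stand-in for the paper's per-cell aspect ratio 2β_i, and the optimization step itself is not implemented.

```python
import numpy as np
from scipy.spatial import Voronoi, cKDTree  # requires SciPy

def coverage_over_separation(points):
    """Per-point ratio of Voronoi-cell coverage radius to nearest-neighbor distance.

    Cells touching the domain boundary (unbounded regions) are skipped.
    """
    vor = Voronoi(points)
    tree = cKDTree(points)
    sep, _ = tree.query(points, k=2)              # column 1: nearest other sample
    ratios = {}
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if not region or -1 in region:            # unbounded cell: skip
            continue
        verts = vor.vertices[region]
        coverage = np.linalg.norm(verts - points[i], axis=1).max()
        ratios[i] = coverage / sep[i, 1]
    return ratios

# Lower ratios mean better-spaced points; an optimization such as Opt-beta_i
# would nudge each point to shrink its own ratio while keeping the set random.
rng = np.random.default_rng(2)
print(max(coverage_over_separation(rng.random((400, 2))).values()))
```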


ACM Transactions on Graphics | 2014

k-d Darts: Sampling by k-dimensional flat searches

Mohamed S. Ebeida; Anjul Patney; Scott A. Mitchell; Keith R. Dalbey; Andrew A. Davidson; John D. Owens

We formalize sampling a function using k-d darts. A k-d dart is a set of independent, mutually orthogonal, k-dimensional hyperplanes called k-d flats. A dart has d choose k flats, aligned with the coordinate axes for efficiency. We show k-d darts are useful for exploring a function's properties, such as estimating its integral, or finding an exemplar above a threshold. We describe a recipe for converting some algorithms from point sampling to k-d dart sampling, if the function can be evaluated along a k-d flat. We demonstrate that k-d darts are more efficient than point-wise samples in high dimensions, depending on the characteristics of the domain: for example, the subregion of interest has small volume and evaluating the function along a flat is not too expensive. We present three concrete applications using line darts (1-d darts): relaxed maximal Poisson-disk sampling, high-quality rasterization of depth-of-field blur, and estimation of the probability of failure from a response surface for uncertainty quantification. Line darts achieve the same output fidelity as point sampling in less time. For Poisson-disk sampling, we use less memory, enabling the generation of larger point distributions in higher dimensions. Higher-dimensional darts provide greater accuracy for a particular volume estimation problem.
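
The snippet below is a toy instance of a line dart (1-d dart): estimating the area of a disk inside the unit square by averaging the exact chord length of random axis-aligned lines, so each dart integrates the indicator function along a whole flat rather than probing a single point. It illustrates the general idea only and is not the authors' code.

```python
import math
import random

def line_dart_area(radius=0.3, center=(0.5, 0.5), n_darts=10000):
    """Estimate the area of a disk inside the unit square with 1-d line darts.

    The length of an axis-aligned line's intersection with the disk is known
    in closed form, so one dart evaluates the region along an entire flat.
    """
    cx, cy = center
    total = 0.0
    for _ in range(n_darts):
        if random.random() < 0.5:                  # vertical line x = t
            t, c = random.random(), cx
        else:                                      # horizontal line y = t
            t, c = random.random(), cy
        d = abs(t - c)
        if d < radius:
            total += 2.0 * math.sqrt(radius * radius - d * d)   # chord length
    return total / n_darts

print(line_dart_area(), "vs exact", math.pi * 0.3 ** 2)
```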


IMR | 2010

Q-TRAN: A New Approach to Transform Triangular Meshes into Quadrilateral Meshes Locally

Mohamed S. Ebeida; Kaan Karamete; Eric L. Mestreau; Saikat Dey

Q-Tran is a new indirect algorithm to transform triangular tessellations of bounded three-dimensional surfaces into all-quadrilateral meshes. The proposed method is simple, fast, and produces quadrilaterals with provably good quality, and hence it does not require a smoothing post-processing step. The method is capable of identifying and recovering structured regions in the input tessellation. The number of generated quadrilaterals tends to be almost the same as the number of triangles in the input tessellation. Q-Tran preserves the vertices of the input tessellation and hence the geometry is preserved even for highly curved surfaces. Several examples of Q-Tran are presented to demonstrate the efficiency of the proposed method.
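
The sketch below shows the generic "indirect" idea in its simplest form: pair triangles that share an edge into quadrilaterals and leave the rest alone. It is not Q-Tran's pairing strategy, carries none of its quality guarantees, and uses hypothetical function and variable names.

```python
def pair_triangles_into_quads(triangles):
    """Greedy indirect triangle-to-quad conversion.

    `triangles` is a list of (v0, v1, v2) vertex-index tuples.  Two triangles
    sharing an edge merge into one quad; leftovers stay as triangles.
    """
    edge_to_tris = {}
    for t_idx, tri in enumerate(triangles):
        for i in range(3):
            edge = tuple(sorted((tri[i], tri[(i + 1) % 3])))
            edge_to_tris.setdefault(edge, []).append(t_idx)

    used, quads = set(), []
    for edge, tris in edge_to_tris.items():
        if len(tris) != 2 or used & set(tris):
            continue
        a, b = (triangles[i] for i in tris)
        apex_a = next(v for v in a if v not in edge)
        apex_b = next(v for v in b if v not in edge)
        quads.append((apex_a, edge[0], apex_b, edge[1]))   # shared edge becomes the diagonal
        used.update(tris)

    leftovers = [t for i, t in enumerate(triangles) if i not in used]
    return quads, leftovers

# Two triangles sharing edge (1, 2) merge into the quad (0, 1, 3, 2).
print(pair_triangles_into_quads([(0, 1, 2), (1, 3, 2)]))
```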


Symposium on Geometry Processing | 2016

Disk density tuning of a maximal random packing

Mohamed S. Ebeida; Ahmad Rushdi; Muhammad A. Awad; Ahmed H. Mahmoud; Dong-Ming Yan; Shawn Allen English; John D. Owens; Chandrajit L. Bajaj; Scott A. Mitchell

We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson‐disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more‐aggressive local operations. We may achieve a user‐defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict‐free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially‐varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing non‐obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations.
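
A minimal sketch of two of the local operations follows, assuming a unit-square domain and a constant radius r: inject a disk at a conflict-free location, or relocate an existing disk to another conflict-free spot. Only the minimum-separation constraint is checked; the coverage (maximality) bookkeeping that the paper maintains for each operation is omitted, and the function names are illustrative only.

```python
import math
import random

def conflict_free(p, points, r, skip=None):
    """True if p keeps the minimum separation r from every point (except index `skip`)."""
    return all(i == skip or math.dist(p, q) >= r for i, q in enumerate(points))

def try_inject(points, r, attempts=100):
    """Add one disk if a conflict-free location can be found (raises density)."""
    for _ in range(attempts):
        p = (random.random(), random.random())
        if conflict_free(p, points, r):
            points.append(p)
            return True
    return False

def try_relocate(points, r, i, attempts=100):
    """Move disk i to another conflict-free spot, e.g. to open room for an injection."""
    for _ in range(attempts):
        p = (random.random(), random.random())
        if conflict_free(p, points, r, skip=i):
            points[i] = p
            return True
    return False
```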

Collaboration


Dive into Mohamed S. Ebeida's collaborations.

Top Co-Authors

Scott A. Mitchell (Sandia National Laboratories)
John D. Owens (University of California)
Ahmad Rushdi (University of California)
Patrick M. Knupp (Sandia National Laboratories)
Chandrajit L. Bajaj (University of Texas at Austin)
Laura Painton Swiler (Sandia National Laboratories)
Ahmad A. Rushdi (University of Texas at Austin)