Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Benoît Hudson is active.

Publication


Featured research published by Benoît Hudson.


IMR | 2006

Sparse Voronoi Refinement

Benoît Hudson; Gary L. Miller; Todd Phillips

We present a new algorithm, Sparse Voronoi Refinement, that produces a conformal Delaunay mesh in arbitrary dimension with guaranteed mesh size and quality. Our algorithm runs in output-sensitive time O(n log(L/s)+m), with constants depending only on dimension and on prescribed element shape quality bounds. For a large class of inputs, including integer coordinates, this matches the optimal time bound of Θ(n log n + m). Our new technique uses interleaving: we maintain a sparse mesh as we mix the recovery of input features with the addition of Steiner vertices for quality improvement. This technical report is the long version of an article [HMP06] presented at IMR 2006, and contains full proofs.
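
The element shape quality mentioned above is commonly measured by the circumradius-to-shortest-edge ratio of each mesh element. The minimal Python sketch below is not taken from the paper; the threshold rho_bound and the helper names are illustrative. It only shows how such a quality test singles out skinny triangles that a Delaunay refinement algorithm would split by inserting a Steiner vertex.

```python
import math

def circumradius(a, b, c):
    """Circumradius of triangle abc in the plane (points given as (x, y))."""
    la = math.dist(b, c)
    lb = math.dist(a, c)
    lc = math.dist(a, b)
    # Twice the unsigned triangle area, via the cross product.
    area2 = abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    return (la * lb * lc) / (2 * area2)

def is_skinny(a, b, c, rho_bound=2.0):
    """Radius-edge quality test: an element is poor quality when its
    circumradius-to-shortest-edge ratio exceeds rho_bound; a refinement
    algorithm would insert a Steiner vertex to split such an element."""
    shortest = min(math.dist(a, b), math.dist(b, c), math.dist(a, c))
    return circumradius(a, b, c) / shortest > rho_bound

# A long, thin triangle fails the test and would trigger refinement.
print(is_skinny((0, 0), (10, 0), (5, 0.2)))   # True
print(is_skinny((0, 0), (1, 0), (0.5, 0.9)))  # False (well shaped)
```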


Symposium on Computational Geometry | 2010

Topological inference via meshing

Benoît Hudson; Gary L. Miller; Steve Oudot; Donald R. Sheehy

We apply ideas from mesh generation to improve the time and space complexities of computing the full persistent homological information associated with a point cloud P in Euclidean space R^d. Classical approaches rely on the Čech, Rips, α-complex, or witness complex filtrations of P, whose complexities scale up very badly with d. For instance, the α-complex filtration incurs the n^Ω(d) size of the Delaunay triangulation, where n is the size of P. The common alternative is to truncate the filtrations when the sizes of the complexes become prohibitive, possibly before discovering the most relevant topological features. In this paper we propose a new collection of filtrations, based on the Delaunay triangulation of a carefully chosen superset of P, whose sizes are reduced to 2^O(d²) n. Our filtrations interleave multiplicatively with the family of offsets of P, so that the persistence diagram of P can be approximated in 2^O(d²) n³ time in theory, with a near-linear observed running time in practice. Thus, our approach remains tractable in medium dimensions, say 4 to 10.
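
For contrast with the Delaunay-based filtrations proposed in the paper, the sketch below builds only the 1-skeleton of a classical Vietoris-Rips filtration, one of the baselines the abstract mentions. The function name and the sample points are assumptions made purely for illustration.

```python
from itertools import combinations
import math

def rips_edge_filtration(points):
    """1-skeleton of the Vietoris-Rips filtration: each pair of points is an
    edge whose filtration value is their distance; edges are returned in
    order of appearance."""
    edges = [(math.dist(p, q), (i, j))
             for (i, p), (j, q) in combinations(enumerate(points), 2)]
    return sorted(edges)

# Four points near a unit circle: the four short edges appear first and
# create a 1-cycle; the two long diagonals, whose arrival lets higher
# simplices fill the cycle in, only appear at a larger scale.
pts = [(1, 0), (0, 1), (-1, 0), (0, -1)]
for value, (i, j) in rips_edge_filtration(pts):
    print(f"edge ({i}, {j}) enters at scale {value:.3f}")
```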


ACM Symposium on Parallel Algorithms and Architectures | 2007

Sparse parallel Delaunay mesh refinement

Benoît Hudson; Gary L. Miller; Todd Phillips

The authors recently introduced the technique of sparse mesh refinement to produce the first near-optimal sequential time bounds of O(n lg(L/s) + m) for inputs in any fixed dimension with piecewise-linear constraining (PLC) features. This paper extends that work to the parallel case, refining the same inputs in time O(lg(L/s) lg m) on an EREW PRAM while maintaining the work bound; in practice, this means we expect linear speedup for any practical number of processors. This is faster than the best previously known parallel Delaunay mesh refinement algorithms in two dimensions, and it is the first technique whose work bound matches the sequential case. In higher dimensions, it is the first provably fast parallel technique for any kind of quality mesh refinement with PLC inputs. Furthermore, the algorithm's implementation is straightforward enough that it is likely to be extremely fast in practice.
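
The "linear speedup for any practical number of processors" claim can be read through the standard work/span scheduling bound (Brent's bound), which is a general result rather than anything specific to this paper. The numbers below are hypothetical and only illustrate why near-optimal work plus polylogarithmic depth keeps speedup close to p as long as p stays far below work/depth.

```python
def brent_time_bound(work, span, processors):
    """Brent's bound: a computation with total work W and critical-path
    length (span) D can be scheduled on p processors in at most
    W/p + D steps."""
    return work / processors + span

# Hypothetical work/span figures for a refinement run: with a span that is
# only polylogarithmic in the input, the W/p term dominates for any
# practical p, i.e. speedup is essentially linear.
work, span = 1e9, 1e4
for p in (1, 8, 64, 512):
    bound = brent_time_bound(work, span, p)
    print(f"p={p:4d}  time <= {bound:.3e}  speedup >= {work / bound:.1f}")
```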


Computational Geometry: Theory and Applications | 2013

Dynamic well-spaced point sets

Umut A. Acar; Andrew Cotter; Benoît Hudson; Duru Türkoğlu

In a well-spaced point set the Voronoi cells all have bounded aspect ratio. Well-spaced point sets satisfy some important geometric properties and yield quality Voronoi or simplicial meshes that are important in scientific computations. In this paper, we consider the dynamic well-spaced point set problem, which requires constructing a well-spaced superset of a dynamically changing input set, e.g., as input points are inserted or deleted. We present a dynamic algorithm that allows inserting/deleting points into/from the input in O(log Δ) time, where Δ is the geometric spread, a natural measure that yields an O(log n) bound when input points are represented by log-size words. We show that this algorithm is time-optimal by proving a lower bound of Ω(log Δ) for a dynamic update. We also show that this algorithm maintains size-optimal outputs: the well-spaced supersets are within a constant factor of the minimum possible size. The asymptotic bounds in our results hold in any constant-dimensional space. Experiments with a preliminary implementation indicate that dynamic changes may be performed with considerably greater efficiency than reconstructing a well-spaced point set from scratch. To the best of our knowledge, these are the first time- and size-optimal algorithms for dynamically maintaining well-spaced point sets.
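
A minimal sketch of the geometric spread Δ that drives the O(log Δ) update bound; the brute-force pairwise computation and the sample coordinates are illustrative assumptions, not part of the paper's algorithm.

```python
from itertools import combinations
import math

def geometric_spread(points):
    """Geometric spread: largest pairwise distance divided by smallest
    pairwise distance (brute-force O(n^2), for illustration only)."""
    dists = [math.dist(p, q) for p, q in combinations(points, 2)]
    return max(dists) / min(dists)

# With b-bit integer coordinates in constant dimension, the spread is at
# most about 2^b * sqrt(dim), so log(spread) = O(b); for log(n)-bit words
# this is the O(log n) bound mentioned above.
pts = [(0, 0), (3, 4), (1023, 1), (512, 512)]
print(math.log2(geometric_spread(pts)))
```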


ACM Symposium on Parallel Algorithms and Architectures | 2011

Parallelism in dynamic well-spaced point sets

Umut A. Acar; Andrew Cotter; Benoît Hudson; Duru Türkoğlu

Parallel algorithms and dynamic algorithms possess an interesting duality property: compared to sequential algorithms, parallel algorithms improve run-time while preserving work, whereas dynamic algorithms improve work but typically offer no parallelism. Although they are often considered separately, parallel and dynamic algorithms employ similar design techniques: both identify parts of the computation that are independent of each other. This suggests that dynamic algorithms could be parallelized to improve work efficiency while preserving fast parallel run-time. In this paper, we parallelize a dynamic algorithm for well-spaced point sets, an important problem related to mesh refinement in computational geometry. Our parallel dynamic algorithm computes a well-spaced superset of a dynamically changing set of points, allowing arbitrary dynamic modifications to the input set. On an EREW PRAM, our algorithm processes batches of k modifications such as insertions and deletions in O(k log Δ) total work and in O(log Δ) parallel time using k processors, where Δ is the geometric spread of the data, while ensuring that the output is always within a constant factor of the optimal size. The EREW PRAM model is quite different from actual hardware such as modern multiprocessors, so we also describe techniques for implementing our algorithm on modern multi-core computers and provide a prototype implementation. Our empirical evaluation shows that our algorithm can be practical, yielding a large degree of parallelism and good speedups.
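
The batch accounting above (O(k log Δ) work and O(log Δ) parallel time with k processors) amounts to running the k per-modification repairs side by side. The structural sketch below is an assumption-laden stand-in: repair_after is a hypothetical placeholder for the real repair work, and the real algorithm also has to coordinate repairs whose affected neighborhoods overlap.

```python
from concurrent.futures import ThreadPoolExecutor

def repair_after(modification):
    """Hypothetical placeholder for the O(log Δ) repair that one insertion
    or deletion triggers in the well-spaced superset."""
    kind, point = modification
    return (kind, point, "repaired")

def process_batch(batch):
    """Run the per-modification repairs of one batch side by side, one
    worker per modification, so the batch takes roughly the time of a
    single repair. Structural sketch only: the real algorithm must also
    coordinate repairs whose affected neighborhoods overlap."""
    with ThreadPoolExecutor(max_workers=len(batch)) as pool:
        return list(pool.map(repair_after, batch))

batch = [("insert", (0.1, 0.2)), ("delete", (0.5, 0.5)), ("insert", (0.9, 0.4))]
print(process_batch(batch))
```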


Algorithm Engineering and Experimentation | 1999

Accessing the Internal Organization of Data Structures in the JDSL Library

Michael T. Goodrich; Mark Handy; Benoît Hudson; Roberto Tamassia

Many applications require data structures that allow efficient access to their internal organization and to their elements. This feature has been implemented in some libraries with iterators or items. We present an alternative implementation, used in the Library of Data Structures for Java (JDSL). We refine the notion of an item and split it into two related concepts: position and locator. Positions are an abstraction of a pointer to a node or an index into an array; they provide direct access to the in-memory structure of the container. Locators add a level of indirection and allow the user to find a specific element even if the position holding the element changes.
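
The position/locator split can be illustrated outside Java as well. The toy Python container below is not the JDSL API; it only shows the idea that a position is a direct handle into the in-memory layout (here, an array index), while a locator keeps following its element when insertions shift it to a new position.

```python
class Locator:
    """Tracks one element across structural changes: the container updates
    the locator's position whenever the element moves."""
    def __init__(self, element, position):
        self.element = element
        self.position = position   # index of the element's current slot

class SortedSequence:
    """Toy array-based sorted sequence. Positions are array indices;
    locators follow an element even when insertions shift it."""
    def __init__(self):
        self._slots = []           # Locator objects, kept sorted by element

    def insert(self, element):
        loc = Locator(element, None)
        i = 0
        while i < len(self._slots) and self._slots[i].element < element:
            i += 1
        self._slots.insert(i, loc)
        # Re-assign positions for the shifted locators.
        for j in range(i, len(self._slots)):
            self._slots[j].position = j
        return loc

    def element_at(self, position):
        return self._slots[position].element

seq = SortedSequence()
loc_b = seq.insert("banana")
print(loc_b.position)        # 0
seq.insert("apple")          # shifts "banana" to index 1
print(loc_b.position)        # 1: the locator still finds its element
print(seq.element_at(1))     # "banana"
```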


Electronic Commerce | 2003

Using value queries in combinatorial auctions

Benoît Hudson; Tuomas Sandholm

Combinatorial auctions, where bidders can bid on bundles of items, are known to be desirable auction mechanisms for selling items that are complementary and/or substitutable. However, there are 2^k − 1 bundles of k items, and each agent may need to bid on all of them to fully express its preferences. We address this by showing how the auctioneer can incrementally recommend to the agents which bundles to bid on, so that they need only place a small fraction of all possible bids. These algorithms impose a great computational burden on the auctioneer; we show how to speed them up dramatically. We also present an optimal elicitor, which is intractable but may be the basis for future algorithms. Finally, we introduce the notion of a universal revelation reducer, demonstrate a randomized one, and prove that no deterministic one exists. The full paper is available in draft form at http://www.cs.cmu.edu/~sandholm/using_value_queries.pdf.
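
To make the 2^k − 1 figure concrete, the short sketch below enumerates every non-empty bundle of k items (the item names are made up). The elicitation algorithms themselves, which decide which of these bundles to actually query, are not shown.

```python
from itertools import chain, combinations

def all_bundles(items):
    """Every non-empty bundle (subset) of the items: 2^k - 1 of them."""
    return list(chain.from_iterable(
        combinations(items, r) for r in range(1, len(items) + 1)))

# Hypothetical item names; with only 4 items there are already 15 bundles,
# and the count doubles with every additional item.
items = ["license_A", "license_B", "license_C", "license_D"]
bundles = all_bundles(items)
print(len(bundles), 2 ** len(items) - 1)   # 15 15
```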


Lecture Notes in Computer Science | 2002

Effectiveness of Preference Elicitation in Combinatorial Auctions

Benoît Hudson; Tuomas Sandholm


Archive | 1999

An Experimental Study of SBH with Gapped Probes

Benoît Hudson


Archive | 2007

Dynamic mesh refinement

Gary L. Miller; Benoît Hudson

Collaboration


Dive into Benoît Hudson's collaborations.

Top Co-Authors


Gary L. Miller

Carnegie Mellon University


Tuomas Sandholm

Carnegie Mellon University


Umut A. Acar

Carnegie Mellon University


Andrew Cotter

Toyota Technological Institute at Chicago


Todd Phillips

Carnegie Mellon University
