Alexandre Kaspar
Massachusetts Institute of Technology
Publications
Featured research published by Alexandre Kaspar.
Computer Graphics Forum | 2015
Alexandre Kaspar; Boris Neubert; Dani Lischinski; Mark Pauly; Johannes Kopf
The goal of example‐based texture synthesis methods is to generate arbitrarily large textures from limited exemplars in order to fit the exact dimensions and resolution required for a specific modeling task. The challenge is to faithfully capture all of the visual characteristics of the exemplar texture, without introducing obvious repetitions or unnatural looking visual elements. While existing non‐parametric synthesis methods have made remarkable progress towards this goal, most such methods have been demonstrated only on relatively low‐resolution exemplars. Real‐world high resolution textures often contain texture details at multiple scales, which these methods have difficulty reproducing faithfully. In this work, we present a new general‐purpose and fully automatic self‐tuning non‐parametric texture synthesis method that extends Texture Optimization by introducing several key improvements that result in superior synthesis ability. Our method is able to self‐tune its various parameters and weights and focuses on addressing three challenging aspects of texture synthesis: (i) irregular large scale structures are faithfully reproduced through the use of automatically generated and weighted guidance channels; (ii) repetition and smoothing of texture patches is avoided by new spatial uniformity constraints; (iii) a smart initialization strategy is used in order to improve the synthesis of regular and near‐regular textures, without affecting textures that do not exhibit regularities. We demonstrate the versatility and robustness of our completely automatic approach on a variety of challenging high‐resolution texture exemplars.
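The baseline the paper extends, Texture Optimization, alternates a nearest-neighbor patch search with an averaging ("voting") step. The sketch below is a minimal, brute-force version of that baseline loop, not the paper's self-tuning pipeline: the guidance channels, uniformity constraints, and smart initialization described above are omitted, and all function and parameter names are illustrative assumptions.

```python
import numpy as np

def extract_patches(img, size, stride):
    """Collect flattened size x size patches from `img` on a regular grid."""
    h, w, _ = img.shape
    coords, patches = [], []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            coords.append((y, x))
            patches.append(img[y:y + size, x:x + size].reshape(-1))
    return coords, np.asarray(patches)

def texture_optimization(exemplar, out_shape, size=16, stride=8, iters=5, seed=0):
    """Baseline Texture Optimization: alternate nearest-neighbor search and voting."""
    rng = np.random.default_rng(seed)
    _, ex_patches = extract_patches(exemplar, size, size // 2)
    out = rng.random(out_shape + (exemplar.shape[2],))
    for _ in range(iters):
        coords, out_patches = extract_patches(out, size, stride)
        # Search step: closest exemplar patch for every synthesized patch (brute force).
        dists = ((out_patches[:, None, :] - ex_patches[None, :, :]) ** 2).sum(-1)
        best = ex_patches[dists.argmin(axis=1)]
        # Voting step: blend the selected exemplar patches back into the output.
        acc = np.zeros_like(out)
        weight = np.zeros(out_shape + (1,))
        for (y, x), p in zip(coords, best):
            acc[y:y + size, x:x + size] += p.reshape(size, size, -1)
            weight[y:y + size, x:x + size] += 1.0
        out = acc / np.maximum(weight, 1.0)
    return out
```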
Computer-aided Design | 2015
Bailin Deng; Sofien Bouaziz; Mario Deuss; Alexandre Kaspar; Yuliy Schwartzburg; Mark Pauly
In architectural design, surface shapes are commonly subject to geometric constraints imposed by material, fabrication or assembly. Rationalization algorithms can convert a freeform design into a form feasible for production, but often require design modifications that might not comply with the design intent. In addition, they only offer limited support for exploring alternative feasible shapes, due to the high complexity of the optimization algorithm. We address these shortcomings and present a computational framework for interactive shape exploration of discrete geometric structures in the context of freeform architectural design. Our method is formulated as a mesh optimization subject to shape constraints. Our formulation can enforce soft constraints and hard constraints at the same time, and handles equality constraints and inequality constraints in a unified way. We propose a novel numerical solver that splits the optimization into a sequence of simple subproblems that can be solved efficiently and accurately. Based on this algorithm, we develop a system that allows the user to explore designs satisfying geometric constraints. Our system offers full control over the exploration process, by providing direct access to the specification of the design space. At the same time, the complexity of the underlying optimization is hidden from the user, who communicates with the system through intuitive interfaces. Highlights: a general optimization framework for deforming meshes under constraints; soft constraints and hard constraints handled in a unified way; an efficient parallel solver suitable for interactive applications; a system for exploring the feasible shapes of constrained meshes in real time.
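The solver described above splits the optimization into simple subproblems. As a rough illustration only, not the paper's actual formulation, the sketch below alternates independent per-constraint projections with a simple per-vertex averaging blend, using edge-length constraints as a stand-in for general shape constraints; the names and the Jacobi-style blend are assumptions made for brevity (the paper solves a global sparse least-squares system instead).

```python
import numpy as np

def project_edge_length(p0, p1, length):
    """'Local' step for one constraint: move both endpoints so that |p1 - p0| == length."""
    mid = 0.5 * (p0 + p1)
    d = p1 - p0
    d = d / (np.linalg.norm(d) + 1e-12)
    return mid - 0.5 * length * d, mid + 0.5 * length * d

def solve_edge_lengths(verts, edges, lengths, iters=50, w_fit=1.0):
    """Alternate per-constraint projections with a per-vertex averaging blend."""
    verts = np.array(verts, dtype=float)
    for _ in range(iters):
        accum = w_fit * verts.copy()              # pull toward the current shape
        count = np.full((len(verts), 1), w_fit)
        for (i, j), L in zip(edges, lengths):     # local step: independent per edge
            qi, qj = project_edge_length(verts[i], verts[j], L)
            accum[i] += qi; accum[j] += qj
            count[i] += 1.0; count[j] += 1.0
        verts = accum / count                     # blend step: average the projections
    return verts
```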
user interface software and technology | 2016
Kiril Vidimče; Alexandre Kaspar; Ye Wang; Wojciech Matusik
We demonstrate a new approach for designing functional material definitions for multi-material fabrication using our system called Foundry. Foundry provides an interactive and visual process for hierarchically designing spatially-varying material properties (e.g., appearance, mechanical, optical). The resulting meta-materials exhibit structure at the micro and macro levels and can surpass the qualities of traditional composites. The material definitions are created by composing a set of operators into an operator graph. Each operator performs a volume decomposition operation, remaps space, or constructs and assigns a material composition. The operators are implemented using a domain-specific language for multi-material fabrication; users can easily extend the library by writing their own operators. Foundry can be used to build operator graphs that describe complex, parameterized, resolution-independent, and reusable material definitions. We also describe how to stage the evaluation of the final material definition, which, in conjunction with progressive refinement, allows for interactive material evaluation even for complex designs. We show sophisticated and functional parts designed with our system.
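To make the operator-graph idea concrete, here is a toy functional analogue in Python, not Foundry's actual DSL: each operator either assigns a material or decomposes/remaps space and delegates to its children, and composing operators yields a resolution-independent material definition that can be queried at any point. All operator names and the material set are invented for illustration.

```python
from typing import Callable

# A material definition maps a 3D point to a material mixture (a dict of ratios).
MaterialFn = Callable[[float, float, float], dict]

def constant(material: str) -> MaterialFn:
    """Leaf operator: assign a single material everywhere."""
    return lambda x, y, z: {material: 1.0}

def split_z(threshold: float, below: MaterialFn, above: MaterialFn) -> MaterialFn:
    """Decomposition operator: route the query to one child based on height."""
    return lambda x, y, z: below(x, y, z) if z < threshold else above(x, y, z)

def gradient_z(z0: float, z1: float, a: MaterialFn, b: MaterialFn) -> MaterialFn:
    """Blending operator: mix two children along z for a graded transition."""
    def f(x, y, z):
        t = min(max((z - z0) / (z1 - z0), 0.0), 1.0)
        ma, mb = a(x, y, z), b(x, y, z)
        out = {k: (1 - t) * v for k, v in ma.items()}
        for k, v in mb.items():
            out[k] = out.get(k, 0.0) + t * v
        return out
    return f

# Compose a small operator graph: rigid base, graded transition, soft top.
definition = split_z(0.5,
                     constant("rigid_resin"),
                     gradient_z(0.5, 1.0, constant("rigid_resin"), constant("soft_resin")))
print(definition(0.0, 0.0, 0.75))   # {'rigid_resin': 0.5, 'soft_resin': 0.5}
```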
ACM Transactions on Graphics | 2017
Vahid Babaei; Kiril Vidimče; Michael Foshey; Alexandre Kaspar; Piotr Didyk; Wojciech Matusik
Appearance reproduction is an important aspect of 3D printing. Current color reproduction systems use halftoning methods that create colors through a spatial combination of different inks at the object's surface. This introduces a variety of artifacts to the object, especially when viewed from a closer distance. In this work, we propose an alternative color reproduction method for 3D printing. Inspired by the inherent ability of 3D printers to layer different materials on top of each other, 3D color contoning creates colors by combining inks with various thicknesses inside the object's volume. Since inks are inside the volume, our technique results in a uniform color surface with virtually invisible spatial patterns on the surface. For color prediction, we introduce a simple and highly accurate spectral model that relies on a weighted regression of spectral absorptions. We fully characterize the proposed framework by addressing a number of problems, such as material arrangement, calculation of ink concentration, and 3D dot gain. We use a custom 3D printer to fabricate and validate our results.
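The color prediction step relies on a regression of spectral absorptions against ink thicknesses. The sketch below conveys the general idea under simplifying assumptions: an unweighted least-squares fit of per-wavelength absorbance with a Beer-Lambert-style conversion to and from reflectance. It is not the paper's exact (weighted) model, and all names are illustrative.

```python
import numpy as np

def fit_absorption_model(thicknesses, reflectances, eps=1e-4):
    """Fit per-wavelength coefficients mapping ink thicknesses to spectral absorbance.

    thicknesses:  (n_samples, n_inks) ink layer thicknesses of the training prints
    reflectances: (n_samples, n_wavelengths) measured spectral reflectances
    """
    absorbance = -np.log(np.clip(reflectances, eps, 1.0))         # Beer-Lambert-style
    X = np.hstack([thicknesses, np.ones((len(thicknesses), 1))])  # thicknesses + offset
    coeffs, *_ = np.linalg.lstsq(X, absorbance, rcond=None)
    return coeffs

def predict_reflectance(coeffs, thicknesses):
    """Predict the spectrum of a new ink stack from its layer thicknesses."""
    X = np.hstack([np.atleast_2d(thicknesses), np.ones((1, 1))])
    return np.exp(-(X @ coeffs))[0]
```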
acm multimedia | 2015
Kiana Calagari; Mohamed A. Elgharib; Piotr Didyk; Alexandre Kaspar; Wojciech Matusik; Mohamed Hefeeda
Widespread adoption of 3D videos and technologies is hindered by the lack of high-quality 3D content. One promising solution to address this problem is to use automated 2D-to-3D conversion. However, current conversion methods, while general, produce low-quality results with artifacts that are not acceptable to many viewers. We address this problem by showing how to construct a high-quality, domain-specific conversion method for soccer videos. We propose a novel, data-driven method that generates stereoscopic frames by transferring depth information from similar frames in a database of 3D stereoscopic videos. Creating a database of 3D stereoscopic videos with accurate depth is, however, very difficult. One of the key findings in this paper is showing that computer-generated content in current sports computer games can be used to generate a high-quality 3D video reference database for 2D-to-3D conversion methods. Once we retrieve similar 3D video frames, our technique transfers depth gradients to the target frame while respecting object boundaries. It then computes depth maps from the gradients, and generates the output stereoscopic video. We implement our method and validate it by conducting user studies that evaluate depth perception and visual comfort of the converted 3D videos. We show that our method produces high-quality 3D videos that are almost indistinguishable from videos shot by stereo cameras. In addition, our method significantly outperforms the current state-of-the-art method. For example, up to 20% improvement in the perceived depth is achieved by our method, which translates to improving the mean opinion score from Good to Excellent.
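One step described above, computing a depth map from the transferred depth gradients, amounts to a Poisson reconstruction. The sketch below recovers depth from a target gradient field with plain Jacobi iterations; it is a slow but simple stand-in (the abstract does not prescribe a particular solver), and the names are illustrative.

```python
import numpy as np

def depth_from_gradients(gx, gy, iters=2000):
    """Recover a depth map whose gradients match (gx, gy) by Jacobi iterations
    on the Poisson equation  lap(d) = div(g)."""
    h, w = gx.shape
    # Divergence of the target gradient field (backward differences).
    div = np.zeros((h, w))
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]
    div[1:, :] += gy[1:, :] - gy[:-1, :]
    d = np.zeros((h, w))
    for _ in range(iters):
        # Average of the 4 neighbors (replicated at the border) minus the divergence term.
        up    = np.vstack([d[:1, :], d[:-1, :]])
        down  = np.vstack([d[1:, :], d[-1:, :]])
        left  = np.hstack([d[:, :1], d[:, :-1]])
        right = np.hstack([d[:, 1:], d[:, -1:]])
        d = 0.25 * (up + down + left + right - div)
    return d - d.min()   # depth is only defined up to an additive constant
```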
2013 Symposium on GPU Computing and Applications | 2015
Alexandre Kaspar; Bailin Deng
Constrained meshes play an important role in free-form architectural design, as they can represent panel layouts on free-form surfaces. It is challenging to perform real-time manipulation on such meshes, because all constraints need to be respected during the deformation while the shape quality needs to be maintained. This usually leads to nonlinear constrained optimization problems, which are challenging to solve in real time. In this chapter, we present a GPU-based shape manipulation tool for constrained meshes, using the parallelizable algorithm proposed in Deng et al. (Computer-Aided Design, 2014). We discuss the main challenges and solutions for the GPU implementation and provide a timing comparison against CPU implementations of the algorithm. Our GPU implementation significantly outperforms the CPU version, allowing real-time handle-based deformation for large constrained meshes.
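What makes this class of algorithm GPU-friendly is that the per-constraint projections are mutually independent, so they can run one thread per constraint, with the blend done by scatter-adds. As a rough CPU-side stand-in (NumPy instead of CUDA, with the same illustrative edge-length constraints as the earlier sketch and assumed names), the per-edge loop could be batched like this:

```python
import numpy as np

def project_edges_batched(verts, edges, lengths):
    """All edge-length projections at once; each row is independent, which is
    what a one-thread-per-constraint GPU kernel (or this vectorized form) exploits.
    `edges` is an (n_edges, 2) integer array, `lengths` an (n_edges,) array."""
    p0, p1 = verts[edges[:, 0]], verts[edges[:, 1]]
    mid = 0.5 * (p0 + p1)
    d = p1 - p0
    d = d / (np.linalg.norm(d, axis=1, keepdims=True) + 1e-12)
    half = 0.5 * lengths[:, None] * d
    return mid - half, mid + half

def blend_projections(verts, edges, q0, q1, w_fit=1.0):
    """Scatter-add the projected endpoints back onto the vertices (the blend step)."""
    accum = w_fit * verts.copy()
    count = np.full((len(verts), 1), w_fit)
    np.add.at(accum, edges[:, 0], q0)
    np.add.at(accum, edges[:, 1], q1)
    np.add.at(count, edges[:, 0], 1.0)
    np.add.at(count, edges[:, 1], 1.0)
    return accum / count
```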
human factors in computing systems | 2018
Alexandre Kaspar; Genevieve Patterson; Changil Kim; Yagiz Aksoy; Wojciech Matusik; Mohamed A. Elgharib
In this work, we propose two ensemble methods leveraging a crowd workforce to improve video annotation, with a focus on video object segmentation. Their shared principle is that while individual candidate results may be insufficient on their own, they often complement each other, so they can be combined into something better than any of the individual results, which is the very spirit of collaborative work. For one, we extend a standard polygon-drawing interface to allow workers to annotate negative space, and combine the work of multiple workers instead of relying on a single best one as is commonly done in crowdsourced image segmentation. For the other, we present a method to combine multiple automatic propagation algorithms with the help of the crowd. Such a combination requires an understanding of where the algorithms fail, which we gather using a novel coarse scribble video annotation task. We evaluate our ensemble methods, discuss our design choices for them, and make our web-based crowdsourcing tools and results publicly available.
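As a concrete but simplified picture of how several workers' annotations can be merged, the sketch below combines positive (object) and negative-space masks by per-pixel voting. The paper's actual combination strategies are more involved; the function and its parameters are assumptions.

```python
import numpy as np

def combine_worker_masks(pos_masks, neg_masks, threshold=0.0):
    """Merge several workers' annotations into one segmentation mask.

    pos_masks: boolean arrays marking pixels workers labeled as the object
    neg_masks: boolean arrays marking pixels workers labeled as negative space
    A pixel is kept when its positive votes outweigh its negative votes.
    """
    votes = np.zeros(pos_masks[0].shape, dtype=float)
    for m in pos_masks:
        votes += m.astype(float)
    for m in neg_masks:
        votes -= m.astype(float)
    return votes > threshold
```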
Proceedings of the 1st Annual ACM Symposium on Computational Fabrication | 2017
Vahid Babaei; Kiril Vidimče; Michael Foshey; Alexandre Kaspar; Piotr Didyk; Wojciech Matusik
We introduce a novel color reproduction method for 3D printing. In contrast to currently used halftoning techniques that use surface color mixing, our technique mixes colors inside the printed volume. This mitigates the variety of artifacts associated with color halftoning.
IEEE Transactions on Multimedia | 2017
Kiana Calagari; Mohamed A. Elgharib; Piotr Didyk; Alexandre Kaspar; Wojciech Matusik; Mohamed Hefeeda
A wide adoption of 3-D videos is hindered by the lack of high-quality 3-D content. One promising solution to this problem is data-driven 2-D-to-3-D video conversion. Such approaches are based on learning depth maps from a large dataset of 2-D+Depth images. However, current conversion methods, while general, produce low-quality results with artifacts that are not acceptable to many viewers. We propose a novel, data-driven method for 2-D-to-3-D video conversion. Our method transfers the depth gradients from a large database of 2-D+Depth images. Capturing 2-D+Depth databases, however, is complex and costly, especially for outdoor sports games. We address this problem by creating a synthetic database from computer games and showing that this synthetic database can effectively be used to convert real videos. We propose a spatio-temporal method to ensure the smoothness of the generated depth within individual frames and across successive frames. In addition, we present an object boundary detection method customized for 2-D-to-3-D conversion systems, which produces clear depth boundaries for players. We implement our method and validate it by conducting user studies that evaluate depth perception and visual comfort of the converted 3-D videos. We show that our method produces high-quality 3-D videos that are almost indistinguishable from videos shot by stereo cameras. In addition, our method significantly outperforms the current state-of-the-art methods. For example, up to 20% improvement in the perceived depth is achieved by our method, which translates to improving the mean opinion score from good to excellent.
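One ingredient mentioned above is keeping the generated depth smooth across successive frames. A minimal illustration of temporal smoothing follows (simple exponential filtering, not the paper's spatio-temporal method; the function name and the alpha parameter are assumptions):

```python
import numpy as np

def smooth_depth_temporally(depth_frames, alpha=0.8):
    """Exponentially smooth per-frame depth maps to reduce flicker across frames."""
    smoothed = [np.asarray(depth_frames[0], dtype=float)]
    for d in depth_frames[1:]:
        smoothed.append(alpha * smoothed[-1] + (1.0 - alpha) * np.asarray(d, dtype=float))
    return smoothed
```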
arXiv: Computer Vision and Pattern Recognition | 2018
Changil Kim; Hijung Valentina Shin; Tae-Hyun Oh; Alexandre Kaspar; Mohamed A. Elgharib; Wojciech Matusik