Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Alexander Sorkine-Hornung is active.

Publication


Featured research published by Alexander Sorkine-Hornung.


International Conference on Computer Graphics and Interactive Techniques | 2013

Sketch-based generation and editing of quad meshes

Kenshi Takayama; Daniele Panozzo; Alexander Sorkine-Hornung; Olga Sorkine-Hornung

Coarse quad meshes are the preferred representation for animating characters in movies and video games. In these scenarios, artists want explicit control over the edge flows and the singularities of the quad mesh. Despite the significant advances in recent years, existing automatic quad remeshing algorithms are not yet able to achieve the quality of manually created remeshings. We present an interactive system for manual quad remeshing that provides the user with a high degree of control while avoiding the tediousness involved in existing manual tools. With our sketch-based interface the user constructs a quad mesh by defining patches consisting of individual quads. The desired edge flow is intuitively specified by the sketched patch boundaries, and the mesh topology can be adjusted by varying the number of edge subdivisions at patch boundaries. Our system automatically inserts singularities inside patches if necessary, while providing the user with direct control of their topological and geometrical locations. We developed a set of novel user interfaces that assist the user in constructing a curve network representing such patch boundaries. The effectiveness of our system is demonstrated through a user evaluation with professional artists. Our system is also useful for editing automatically generated quad meshes.
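One constraint underlying such patch-based quadrangulation can be stated compactly: a disk-like patch can be tiled purely by quads only if its boundary is split into an even total number of edge segments, which is why systems of this kind adjust per-side subdivision counts. A minimal sketch of that parity check (the function name and interface are illustrative, not taken from the paper):

```python
def patch_quadrangulatable(side_subdivisions):
    """Parity check for quad-tiling a disk-like patch (toy sketch).

    Each quad contributes an even count to the boundary-edge parity, so a
    patch whose sides are subdivided into the given numbers of segments
    admits an all-quad tessellation only if the total is even.
    """
    return sum(side_subdivisions) % 2 == 0
```

For example, a four-sided patch with three subdivisions per side (12 boundary segments) passes, while a three-sided patch with sides split 2/2/3 (7 segments) cannot be meshed with quads alone.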


Computer Graphics Forum | 2015

Panoramic Video from Unstructured Camera Arrays

Federico Perazzi; Alexander Sorkine-Hornung; Henning Zimmer; Peter Kaufmann; Oliver Wang; Scott Watson; Markus H. Gross

We describe an algorithm for generating panoramic video from unstructured camera arrays. Artifact‐free panorama stitching is impeded by parallax between input views. Common strategies such as multi‐level blending or minimum energy seams produce seamless results on quasi‐static input. However, on video input these approaches introduce noticeable visual artifacts due to lack of global temporal and spatial coherence. In this paper we extend the basic concept of local warping for parallax removal. Firstly, we introduce an error measure with increased sensitivity to stitching artifacts in regions with pronounced structure. Using this measure, our method efficiently finds an optimal ordering of pair‐wise warps for robust stitching with minimal parallax artifacts. Weighted extrapolation of warps in non‐overlap regions ensures temporal stability, while at the same time avoiding visual discontinuities around transitions between views. Remaining global deformation introduced by the warps is spread over the entire panorama domain using constrained relaxation, while staying as close as possible to the original input views. In combination, these contributions form the first system for spatiotemporally stable panoramic video stitching from unstructured camera array input.
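The paper finds an optimal ordering of pairwise warps under its structure-sensitive error measure; as a rough illustration of the idea only, a greedy stand-in (essentially Prim's algorithm on the view graph, assuming a symmetric error matrix is already given) attaches the lowest-error unaligned view to the growing panorama first, deferring unreliable pairs:

```python
def warp_order(error):
    """Greedy ordering of pairwise warps by stitching error (toy sketch).

    error[i][j] scores how badly views i and j stitch together (a stand-in
    for a structure-sensitive error measure). Starting from the globally
    best-matching pair, repeatedly warp the unaligned view with the lowest
    error to any already-aligned view.
    """
    n = len(error)
    # Seed with the globally best-matching pair of views.
    a, b = min(((i, j) for i in range(n) for j in range(n) if i < j),
               key=lambda p: error[p[0]][p[1]])
    aligned, order = {a, b}, [(a, b)]
    # Greedily attach the cheapest remaining view to the panorama.
    while len(aligned) < n:
        i, j = min(((i, j) for i in aligned for j in range(n) if j not in aligned),
                   key=lambda p: error[p[0]][p[1]])
        aligned.add(j)
        order.append((i, j))
    return order
```

With four views where adjacent pairs stitch cheaply and distant pairs poorly, this yields the chain ordering (0,1), (1,2), (2,3), aligning reliable pairs before any problematic ones.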


Computer Vision and Pattern Recognition | 2013

Megastereo: Constructing High-Resolution Stereo Panoramas

Christian Richardt; Yael Pritch; Henning Zimmer; Alexander Sorkine-Hornung

We present a solution for generating high-quality stereo panoramas at megapixel resolutions. While previous approaches introduced the basic principles, we show that those techniques do not generalise well to today's high image resolutions and lead to disturbing visual artefacts. As our first contribution, we describe the necessary correction steps and a compact representation for the input images in order to achieve a highly accurate approximation to the required ray space. Our second contribution is a flow-based upsampling of the available input rays which effectively resolves known aliasing issues like stitching artefacts. The required rays are generated on the fly to perfectly match the desired output resolution, even for small numbers of input images. In addition, the upsampling runs in real time and enables direct interactive control over the desired stereoscopic depth effect. In combination, our contributions allow the generation of stereoscopic panoramas at high output resolutions that are virtually free of artefacts such as seams, stereo discontinuities, vertical parallax and other mono-/stereoscopic shape distortions. Our process is robust, and other types of multiperspective panoramas, such as linear panoramas, can also benefit from our contributions. We show various comparisons and high-resolution results.


Computer Vision and Pattern Recognition | 2015

Phase-based frame interpolation for video

Simone Meyer; Oliver Wang; Henning Zimmer; Max Grosse; Alexander Sorkine-Hornung

Standard approaches to computing interpolated (in-between) frames in a video sequence require accurate pixel correspondences between images, e.g., using optical flow. We present an efficient alternative by leveraging recent developments in phase-based methods that represent motion in the phase shift of individual pixels. This concept allows in-between images to be generated by simple per-pixel phase modification, without the need for any form of explicit correspondence estimation. Up until now, such methods have been limited in the range of motion that can be interpolated, which fundamentally restricts their usefulness. In order to reduce these limitations, we introduce a novel, bounded phase shift correction method that combines phase information across the levels of a multi-scale pyramid. Additionally, we propose extensions for phase-based image synthesis that yield smoother transitions between the interpolated images. Our approach avoids expensive global optimization typical of optical flow methods, and is both simple to implement and easy to parallelize. This allows us to interpolate frames at a fraction of the computational cost of traditional optical flow-based solutions, while achieving similar quality and in some cases even superior results. Our method fails gracefully in difficult interpolation settings, e.g., significant appearance changes, where flow-based methods often introduce serious visual artifacts. Due to its efficiency, our method is especially well suited for frame interpolation and retiming of high resolution, high frame rate video.
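The core idea, that motion is encoded in phase, can be illustrated in one dimension with a plain Fourier transform instead of the paper's multi-scale steerable pyramid. This is a toy sketch, not the published method: a shift of a signal appears as a per-frequency phase difference, so blending phases (with the difference wrapped to [-pi, pi], a crude analogue of bounded shift correction) synthesizes an in-between signal without any correspondence estimation:

```python
import numpy as np

def phase_interpolate(f0, f1, t):
    """Interpolate between two 1D signals by per-frequency phase blending.

    Motion between f0 and f1 shows up as a phase shift per frequency;
    advancing the phase by a fraction t of that (wrapped) shift produces
    an in-between signal with no explicit motion estimation.
    """
    F0, F1 = np.fft.fft(f0), np.fft.fft(f1)
    # Phase difference per frequency, wrapped to [-pi, pi].
    dphi = np.angle(F1 * np.conj(F0))
    # Blend magnitudes linearly, advance phase by t * dphi.
    mag = (1 - t) * np.abs(F0) + t * np.abs(F1)
    phase = np.angle(F0) + t * dphi
    return np.real(np.fft.ifft(mag * np.exp(1j * phase)))
```

For a smooth bump translated by two samples, the t = 0.5 result peaks at the halfway position. The wrapping also shows why such methods are range-limited: once a frequency's phase shift exceeds pi, the wrapped difference aliases to the wrong motion, which is what the paper's multi-scale correction addresses.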


International Conference on Computer Graphics and Interactive Techniques | 2013

Painting by feature: texture boundaries for example-based image creation

Michal Lukác; Jakub Fišer; Jean Charles Bazin; Ondrej Jamriska; Alexander Sorkine-Hornung; Daniel Sýkora

In this paper we propose a reinterpretation of the brush and the fill tools for digital image painting. The core idea is to provide an intuitive approach that allows users to paint in the visual style of arbitrary example images. Rather than a static library of colors, brushes, or fill patterns, we offer users entire images as their palette, from which they can select arbitrary contours or textures as their brush or fill tool in their own creations. Compared to previous example-based techniques related to the painting-by-numbers paradigm we propose a new strategy where users can generate salient texture boundaries by our randomized graph-traversal algorithm and apply a content-aware fill to transfer textures into the delimited regions. This workflow allows users of our system to intuitively create visually appealing images that better preserve the visual richness and fluidity of arbitrary example images. We demonstrate the potential of our approach in various applications including interactive image creation, editing and vector image stylization.


Computer Graphics Forum | 2013

Finite Element Image Warping

Peter Kaufmann; Oliver Wang; Alexander Sorkine-Hornung; Olga Sorkine-Hornung; Aljoscha Smolic; Markus H. Gross

We introduce a single unifying framework for a wide range of content‐aware image warping tasks using a finite element method (FEM). Existing approaches commonly define error terms over vertex finite differences and can be expressed as a special case of our general FEM model. In this work, we exploit the full generality of FEMs, gaining important advantages over prior methods. These advantages include arbitrary mesh connectivity allowing for adaptive meshing and efficient large‐scale solutions, a well‐defined continuous problem formulation that enables clear analysis of existing warping error functions and allows us to propose improved ones, and higher order basis functions that allow for smoother warps with fewer degrees of freedom. To support per‐element basis functions of varying degree and complex mesh connectivity with hanging nodes, we also introduce a novel use of discontinuous Galerkin FEM. We demonstrate the utility of our method by showing examples in video retargeting and camera stabilization applications, and compare our results with previous state of the art methods.


Eurographics | 2015

Path-space motion estimation and decomposition for robust animation filtering

Henning Zimmer; Fabrice Rousselle; Wenzel Jakob; Oliver Wang; David Adler; Wojciech Jarosz; Olga Sorkine-Hornung; Alexander Sorkine-Hornung

Renderings of animation sequences with physics‐based Monte Carlo light transport simulations are exceedingly costly to generate frame‐by‐frame, yet much of this computation is highly redundant due to the strong coherence in space, time and among samples. A promising approach pursued in prior work entails subsampling the sequence in space, time, and number of samples, followed by image‐based spatio‐temporal upsampling and denoising. These methods can provide significant performance gains, though major issues remain: firstly, in a multiple scattering simulation, the final pixel color is the composite of many different light transport phenomena, and this conflicting information causes artifacts in image‐based methods. Secondly, motion vectors are needed to establish correspondence between the pixels in different frames, but it is unclear how to obtain them for most kinds of light paths (e.g. an object seen through a curved glass panel).


Computer Graphics Forum | 2013

Scalable Music: Automatic Music Retargeting and Synthesis

Simon Wenner; Jean Charles Bazin; Alexander Sorkine-Hornung; Changil Kim; Markus H. Gross

In this paper we propose a method for dynamic rescaling of music, inspired by recent works on image retargeting, video reshuffling and character animation in the computer graphics community. Given the desired target length of a piece of music and optional additional constraints such as position and importance of certain parts, we build on concepts from seam carving, video textures and motion graphs and extend them to allow for a global optimization of jumps in an audio signal. Based on an automatic feature extraction and spectral clustering for segmentation, we employ length‐constrained least‐costly path search via dynamic programming to synthesize a novel piece of music that best fulfills all desired constraints, with imperceptible transitions between reshuffled parts. We show various applications of music retargeting such as part removal, decreasing or increasing music duration, and in particular consistent joint video and audio editing.
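The length-constrained least-costly path search can be sketched with a small dynamic program. This is an illustrative simplification under assumed inputs (a precomputed transition-cost matrix between segments, with the natural successor free), not the paper's full pipeline with feature extraction and spectral clustering:

```python
import math

def retarget(n_segments, target_len, cost, start=0):
    """Length-constrained least-cost path over audio segments (toy sketch).

    cost[i][j] is the perceptual cost of jumping from segment i to segment
    j (playing the natural successor j = i + 1 costs nothing). Dynamic
    programming finds a playback order of exactly target_len segments
    minimizing the total transition cost.
    """
    INF = math.inf
    # best[k][j]: minimal cost of a length-(k+1) path from start ending at j.
    best = [[INF] * n_segments for _ in range(target_len)]
    prev = [[-1] * n_segments for _ in range(target_len)]
    best[0][start] = 0.0
    for k in range(1, target_len):
        for j in range(n_segments):
            for i in range(n_segments):
                c = best[k - 1][i] + cost[i][j]
                if c < best[k][j]:
                    best[k][j], prev[k][j] = c, i
    # Recover the cheapest path of the exact target length.
    end = min(range(n_segments), key=lambda j: best[target_len - 1][j])
    path = [end]
    for k in range(target_len - 1, 0, -1):
        path.append(prev[k][path[-1]])
    return path[::-1], best[target_len - 1][end]
```

For instance, stretching a four-segment piece to six segments forces one backward jump; the DP places that single jump where it is cheapest, which mirrors how retargeting lengthens music with imperceptible transitions.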


Computer Vision and Pattern Recognition | 2015

Scalable structure from motion for densely sampled videos

Benjamin Resch; Hendrik P. A. Lensch; Oliver Wang; Marc Pollefeys; Alexander Sorkine-Hornung

Videos consisting of thousands of high resolution frames are challenging for existing structure from motion (SfM) and simultaneous localization and mapping (SLAM) techniques. We present a new approach for simultaneously computing extrinsic camera poses and 3D scene structure that is capable of handling such large volumes of image data. The key insight behind this paper is to effectively exploit coherence in densely sampled video input. Our technical contributions include robust tracking and selection of confident video frames, a novel window bundle adjustment, frame-to-structure verification for globally consistent reconstructions with multi-loop closing, and utilizing efficient global linear camera pose estimation in order to link both consecutive and distant bundle adjustment windows. To our knowledge, we describe the first system that is capable of handling high resolution, high frame-rate video data with close to real-time performance. In addition, our approach can robustly integrate data from different video sequences, allowing multiple video streams to be simultaneously calibrated in an efficient and globally optimal way. We demonstrate high quality alignment on large scale challenging datasets, e.g., 2-20 megapixel resolution at frame rates of 25-120 Hz with thousands of frames.
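The windowing idea behind window bundle adjustment can be sketched in a few lines. This is a minimal illustration under assumed parameters (window size and overlap in frames), not the paper's scheme, which additionally links distant windows via global linear pose estimation and loop closing:

```python
def bundle_windows(n_frames, window, overlap):
    """Partition a dense video into overlapping bundle-adjustment windows.

    Consecutive windows share `overlap` frames so that camera poses and
    scene structure estimated in one window can be linked to the next,
    keeping each optimization small while the sequence stays connected.
    """
    assert 0 <= overlap < window
    step = window - overlap
    return [(s, min(s + window, n_frames))
            for s in range(0, max(n_frames - overlap, 1), step)]
```

For a 10-frame sequence with 4-frame windows overlapping by 2, this yields (0,4), (2,6), (4,8), (6,10): every pair of neighboring windows shares two frames through which the reconstructions are stitched together.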


ACM Transactions on Graphics | 2016

Efficient 3D Object Segmentation from Densely Sampled Light Fields with Applications to 3D Reconstruction

Kaan Yücer; Alexander Sorkine-Hornung; Oliver Wang; Olga Sorkine-Hornung

Precise object segmentation in image data is a fundamental problem with various applications, including 3D object reconstruction. We present an efficient algorithm to automatically segment a static foreground object from highly cluttered background in light fields. A key insight and contribution of our article is that a significant increase of the available input data can enable the design of novel, highly efficient approaches. In particular, the central idea of our method is to exploit high spatio-angular sampling on the order of thousands of input frames, for example, captured as a hand-held video, such that new structures are revealed due to the increased coherence in the data. We first show how purely local gradient information contained in slices of such a dense light field can be combined with information about the camera trajectory to make efficient estimates of the foreground and background. These estimates are then propagated to textureless regions using edge-aware filtering in the epipolar volume. Finally, we enforce global consistency in a gathering step to derive a precise object segmentation in both 2D and 3D space, which captures fine geometric details even in very cluttered scenes. The design of each of these steps is motivated by efficiency and scalability, allowing us to handle large, real-world video datasets on a standard desktop computer. We demonstrate how the results of our method can be used for considerably improving the speed and quality of image-based 3D reconstruction algorithms, and we compare our results to state-of-the-art segmentation and multiview stereo methods.
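The "purely local gradient information in slices" can be made concrete with an epipolar-plane image (EPI): for a camera translating along a line, each scene point traces a line in the EPI whose slope equals its disparity, and brightness constancy along that line gives E_u·d + E_s = 0. The following is a toy sketch of that local estimate, not the paper's full pipeline with edge-aware propagation and the gathering step:

```python
import numpy as np

def epi_disparity(epi, eps=1e-6):
    """Per-pixel disparity from an epipolar-plane image (toy sketch).

    epi is indexed [s, u], with s the camera position along the path and
    u the image column. Brightness constancy along a scene point's EPI
    line gives E_u * d + E_s = 0, so purely local gradients yield
    d = -E_s / E_u 
wherever the image is textured.
    """
    E_s, E_u = np.gradient(epi.astype(float))  # derivatives along s and u
    conf = np.abs(E_u) > eps                   # only textured pixels are reliable
    d = np.where(conf, -E_s / np.where(conf, E_u, 1.0), 0.0)
    return d, conf
```

On a synthetic EPI whose intensity pattern drifts at 0.5 pixels per camera step, the interior estimates recover that slope; with thousands of densely sampled frames, such cheap local estimates become dense enough to separate foreground from background before any global reasoning.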

Collaboration


Dive into Alexander Sorkine-Hornung's collaborations.
