Publication


Featured research published by Oliver Wang.


International Conference on Computer Graphics and Interactive Techniques | 2010

Nonlinear disparity mapping for stereoscopic 3D

Manuel Lang; Alexander Hornung; Oliver Wang; Steven Poulakos; Aljoscha Smolic; Markus H. Gross

This paper addresses the problem of remapping the disparity range of stereoscopic images and video. Such operations are highly important for a variety of issues arising from the production, live broadcast, and consumption of 3D content. Our work is motivated by the observation that the displayed depth and the resulting 3D viewing experience are dictated by a complex combination of perceptual, technological, and artistic constraints. We first discuss the most important perceptual aspects of stereo vision and their implications for stereoscopic content creation. We then formalize these insights into a set of basic disparity mapping operators. These operators enable us to control and retarget the depth of a stereoscopic scene in a nonlinear and locally adaptive fashion. To implement our operators, we propose a new strategy based on stereoscopic warping of the input video streams. From a sparse set of stereo correspondences, our algorithm computes disparity and image-based saliency estimates, and uses them to compute a deformation of the input views so as to meet the target disparities. Our approach represents a practical solution for actual stereo production and display that does not require camera calibration, accurate dense depth maps, occlusion handling, or inpainting. We demonstrate the performance and versatility of our method using examples from live action post-production, 3D display size adaptation, and live broadcast. An additional user study and ground truth comparison further provide evidence for the quality and practical relevance of the presented work.
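To make the notion of a disparity mapping operator concrete, here is a minimal Python sketch of one globally applied nonlinear operator, logarithmic range compression. The paper defines a richer family of locally adaptive operators and realizes them through stereoscopic warping, which this sketch does not attempt; all function and parameter names are illustrative.

```python
import numpy as np

def remap_disparity(d, d_min_target, d_max_target):
    """Nonlinearly compress a disparity map into a target range.

    A logarithmic operator spends more of the target range on small
    disparities (near the screen plane) than on large ones, one of the
    basic operator shapes the paper discusses.
    """
    d_min, d_max = d.min(), d.max()
    # Normalize input disparities to [0, 1].
    t = (d - d_min) / max(d_max - d_min, 1e-8)
    # Logarithmic compression: emphasizes depth variation near the viewer.
    t = np.log1p(9.0 * t) / np.log(10.0)
    # Rescale into the target (display-dependent) disparity range.
    return d_min_target + t * (d_max_target - d_min_target)

# Example: compress a [-80, 120] px disparity range into a comfortable [-20, 30] px range.
disparity = np.random.uniform(-80, 120, size=(540, 960))
remapped = remap_disparity(disparity, -20.0, 30.0)
```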


International Conference on Computer Graphics and Interactive Techniques | 2012

Practical temporal consistency for image-based graphics applications

Manuel Lang; Oliver Wang; Tunc Ozan Aydin; Aljoscha Smolic; Markus H. Gross

We present an efficient and simple method for introducing temporal consistency to a large class of optimization driven image-based computer graphics problems. Our method extends recent work in edge-aware filtering, approximating costly global regularization with a fast iterative joint filtering operation. Using this representation, we can achieve tremendous efficiency gains both in terms of memory requirements and running time. This enables us to process entire shots at once, taking advantage of supporting information that exists across far away frames, something that is difficult with existing approaches due to the computational burden of video data. Our method is able to filter along motion paths using an iterative approach that simultaneously uses and estimates per-pixel optical flow vectors. We demonstrate its utility by creating temporally consistent results for a number of applications including optical flow, disparity estimation, colorization, scribble propagation, sparse data up-sampling, and visual saliency computation.
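The following is a simplified Python/OpenCV sketch of the core idea of filtering along motion paths: per-frame estimates are warped along optical flow and blended under an appearance-based confidence weight. It stands in for the paper's iterative edge-aware joint filtering; Farnebäck flow and the particular weighting used here are assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def temporally_filter(frames_gray, estimates, blend=0.5):
    """Propagate per-frame estimates along optical flow, a simplified
    stand-in for iterative edge-aware joint filtering along motion paths.

    frames_gray: list of HxW uint8 grayscale frames (the guide video)
    estimates:   list of HxW float32 per-frame solutions (e.g. saliency)
    """
    h, w = frames_gray[0].shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    out = [estimates[0].astype(np.float32)]
    for i in range(1, len(frames_gray)):
        # Flow from the current frame back to the previous one.
        flow = cv2.calcOpticalFlowFarneback(frames_gray[i], frames_gray[i - 1],
                                            None, 0.5, 3, 15, 3, 5, 1.2, 0)
        map_x, map_y = xs + flow[..., 0], ys + flow[..., 1]
        # Warp the previous filtered result onto the current frame.
        warped = cv2.remap(out[-1], map_x, map_y, cv2.INTER_LINEAR)
        # Edge-aware weight: trust the warped value only where the guide
        # frames agree in appearance (a crude flow-confidence test).
        prev_warped = cv2.remap(frames_gray[i - 1].astype(np.float32),
                                map_x, map_y, cv2.INTER_LINEAR)
        w_conf = np.exp(-np.abs(frames_gray[i].astype(np.float32) - prev_warped) / 10.0)
        out.append((1 - blend * w_conf) * estimates[i] + blend * w_conf * warped)
    return out
```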


International Conference on Computer Vision | 2015

Fully Connected Object Proposals for Video Segmentation

Federico Perazzi; Oliver Wang; Markus H. Gross; Alexander Sorkine-Hornung

We present a novel approach to video segmentation using multiple object proposals. The problem is formulated as a minimization of a novel energy function defined over a fully connected graph of object proposals. Our model combines appearance with long-range point tracks, which is key to ensuring robustness with respect to fast motion and occlusions over longer video sequences. As opposed to previous approaches based on object proposals, we do not seek the best per-frame object hypotheses to perform the segmentation. Instead, we combine multiple, potentially imperfect proposals to improve overall segmentation accuracy and ensure robustness to outliers. Overall, the basic algorithm consists of three steps. First, we generate a very large number of object proposals for each video frame using existing techniques. Next, we perform an SVM-based pruning step to retain only high-quality proposals with sufficiently discriminative power. Finally, we determine the fore- and background classification by solving for the maximum a posteriori labeling of a fully connected conditional random field, defined using our novel energy function. Experimental results on a well-established dataset demonstrate that our method compares favorably to several recent state-of-the-art approaches.
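As a toy illustration of the energy structure, the sketch below labels proposals using unary costs plus a Potts-style pairwise term over proposal affinities, minimized by iterated conditional modes (ICM) rather than the exact MAP inference on a dense CRF used in the paper. All names and the random data are illustrative.

```python
import numpy as np

def segment_proposals(unary, overlap, pair_weight=1.0, iters=10):
    """Label object proposals fg/bg by minimizing a toy fully connected
    energy with ICM, a crude stand-in for dense-CRF MAP inference.

    unary:   (N, 2) costs of labeling proposal i background/foreground
    overlap: (N, N) pairwise affinity (e.g. IoU or shared point tracks)
    """
    n = unary.shape[0]
    labels = np.argmin(unary, axis=1)  # initialize from unaries alone
    for _ in range(iters):
        for i in range(n):
            cost = unary[i].copy()
            for l in (0, 1):
                # Potts-style penalty: disagreeing with strongly
                # overlapping proposals is expensive.
                cost[l] += pair_weight * np.sum(overlap[i] * (labels != l))
            labels[i] = np.argmin(cost)
    return labels

# Example with random data: 50 proposals, symmetric random affinities.
rng = np.random.default_rng(0)
unary = rng.random((50, 2))
aff = rng.random((50, 50)); aff = (aff + aff.T) / 2; np.fill_diagonal(aff, 0)
print(segment_proposals(unary, aff))
```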


Computers & Graphics | 2010

Real-time temporal shaping of high-speed video streams

Martin Fuchs; Tongbo Chen; Oliver Wang; Ramesh Raskar; Hans-Peter Seidel; Hendrik P. A. Lensch

Digital movie cameras only perform a discrete sampling of real-world imagery. While spatial sampling effects are well studied in the literature, there has been far less work on temporal sampling. Even as cameras get faster and faster, the need remains for conventional frame-rate video that matches the abilities of human perception. In this article, we introduce a system with controlled temporal sampling behavior that transforms a high-fps input stream into a conventional-speed output video in real time. We investigate the effect of different temporal sampling kernels and demonstrate that extended, overlapping kernels can mitigate aliasing artifacts. Furthermore, non-photorealistic rendering (NPR) effects, such as enhanced motion blur, can be achieved. By applying Fourier transforms in the temporal domain, we also obtain novel tools for analyzing and visualizing time-dependent effects. We study the properties of both contemporary and idealized display devices and demonstrate the effect of different sampling kernels in creating enhanced movies and stills of fast motion.
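A minimal numpy sketch of temporal kernel resampling, assuming the high-speed stream fits in memory: each output frame is a Gaussian-weighted average of input frames around its timestamp, and widening the kernel trades sharpness against temporal aliasing, the trade-off the paper investigates. Names and default values are illustrative.

```python
import numpy as np

def resample_stream(frames, in_fps, out_fps, sigma_frames=4.0):
    """Convert a high-fps stream to a low-fps video by weighting input
    frames with an extended, overlapping Gaussian kernel centered on
    each output timestamp (instead of a short box kernel).

    frames: (T, H, W) float array of high-speed input frames
    """
    step = in_fps / out_fps              # input frames per output frame
    t_in = np.arange(frames.shape[0])
    out = []
    for k in np.arange(step / 2, frames.shape[0], step):
        w = np.exp(-0.5 * ((t_in - k) / sigma_frames) ** 2)
        w /= w.sum()
        # Weighted temporal average: a wide kernel overlaps neighboring
        # output frames, trading sharpness for reduced temporal aliasing.
        out.append(np.tensordot(w, frames, axes=1))
    return np.stack(out)

# Example: 1000 fps input down to a conventional 25 fps output.
high_speed = np.random.rand(500, 120, 160)
video = resample_stream(high_speed, in_fps=1000, out_fps=25)
```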


Computer Vision and Pattern Recognition | 2016

Bilateral Space Video Segmentation

Nicolas Märki; Federico Perazzi; Oliver Wang; Alexander Sorkine-Hornung

In this work, we propose a novel approach to video segmentation that operates in bilateral space. We design a new energy on the vertices of a regularly sampled spatiotemporal bilateral grid, which can be solved efficiently using a standard graph cut label assignment. Using a bilateral formulation, the energy that we minimize implicitly approximates long-range, spatio-temporal connections between pixels while still containing only a small number of variables and only local graph edges. We compare to a number of recent methods, and show that our approach achieves state-of-the-art results on multiple benchmarks in a fraction of the runtime. Furthermore, our method scales linearly with image size, allowing for interactive feedback on real-world high resolution video.
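A much-simplified sketch of the lift-solve-slice idea, using luma-only video and a per-cell vote in place of the paper's graph-cut energy on grid vertices; the grid resolution and all names are illustrative assumptions.

```python
import numpy as np

def bilateral_grid_segment(video, fg_scribbles, bg_scribbles, bins=(16, 16, 8, 8)):
    """Lift-solve-slice sketch: scribbles are splatted into a
    (y, x, t, luma) bilateral grid, each cell is labeled by a majority
    vote (a stand-in for the paper's graph-cut label assignment), and
    the labels are sliced back onto all pixels.

    video: (T, H, W) float video in [0, 1] (luma only, for brevity)
    fg_scribbles, bg_scribbles: (T, H, W) boolean masks
    """
    T, H, W = video.shape
    t, y, x = np.meshgrid(np.arange(T), np.arange(H), np.arange(W), indexing="ij")
    # Lift: quantize every pixel to a spatiotemporal bilateral grid cell.
    coords = (
        np.clip((y * bins[0]) // H, 0, bins[0] - 1),
        np.clip((x * bins[1]) // W, 0, bins[1] - 1),
        np.clip((t * bins[2]) // T, 0, bins[2] - 1),
        np.clip((video * bins[3]).astype(int), 0, bins[3] - 1),
    )
    fg = np.zeros(bins); bg = np.zeros(bins)
    np.add.at(fg, tuple(c[fg_scribbles] for c in coords), 1)
    np.add.at(bg, tuple(c[bg_scribbles] for c in coords), 1)
    cell_label = fg > bg          # solve: per-cell vote instead of a graph cut
    return cell_label[coords]     # slice: every pixel reads its cell's label
```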


Computer Vision and Pattern Recognition | 2017

High-Resolution Image Inpainting Using Multi-scale Neural Patch Synthesis

Chao Yang; Xin Lu; Zhe Lin; Eli Shechtman; Oliver Wang; Hao Li

Recent advances in deep learning have shown exciting promise in filling large holes in natural images with semantically plausible and context-aware details, impacting fundamental image manipulation tasks such as object removal. While these learning-based methods are significantly more effective in capturing high-level features than prior techniques, they can only handle very low-resolution inputs due to memory limitations and difficulty in training. Even for slightly larger images, the inpainted regions appear blurry and unpleasant boundaries become visible. We propose a multi-scale neural patch synthesis approach based on joint optimization of image content and texture constraints, which not only preserves contextual structures but also produces high-frequency details by matching and adapting patches with the most similar mid-layer feature correlations of a deep classification network. We evaluate our method on the ImageNet and Paris StreetView datasets and achieve state-of-the-art inpainting accuracy. We show our approach produces sharper and more coherent results than prior methods, especially for high-resolution images.
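To make the texture constraint concrete, here is a toy numpy version of a feature-patch matching loss: patches overlapping the hole are matched to their nearest neighbors among patches from the known region, and the mean matching distance is the penalty. The actual method computes this on mid-layer features of a deep classification network inside a joint multi-scale optimization with a content term; none of that machinery appears here, and the sizes are toy-scale.

```python
import numpy as np

def texture_loss(feat, hole_mask, patch=3):
    """Toy texture constraint: every feature patch that overlaps the
    hole is matched to its nearest neighbor among known-region patches,
    and the mean matching distance is returned. Brute-force matching is
    only practical at this toy scale.

    feat:      (C, H, W) feature map
    hole_mask: (H, W) boolean, True inside the hole
    """
    C, H, W = feat.shape
    hole_patches, known_patches = [], []
    for i in range(H - patch + 1):
        for j in range(W - patch + 1):
            p = feat[:, i:i + patch, j:j + patch].ravel()
            if hole_mask[i:i + patch, j:j + patch].any():
                hole_patches.append(p)
            else:
                known_patches.append(p)
    hole_p, known_p = np.stack(hole_patches), np.stack(known_patches)
    # Nearest-neighbor distance from each hole patch to the known set.
    d = ((hole_p[:, None, :] - known_p[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).mean()

# Example: random 8-channel features with a square hole in the middle.
feat = np.random.rand(8, 32, 32)
mask = np.zeros((32, 32), bool); mask[12:20, 12:20] = True
print(texture_loss(feat, mask))
```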


Pacific Conference on Computer Graphics and Applications | 2007

Automatic Natural Video Matting with Depth

Oliver Wang; Jonathan Finger; Qingxiong Yang; James Davis; Ruigang Yang

Video matting is the process of taking a sequence of frames, isolating the foreground, and replacing the background in each frame. We look at existing single-frame matting techniques and present a method that improves upon them by adding depth information acquired by a time-of-flight range scanner. We use the depth information to automate the process so it can be practically used for video sequences. In addition, we show that we can improve the results from natural matting algorithms by adding a depth channel. The additional depth information allows us to reduce the artifacts that arise from ambiguities that occur when an object is a similar color to its background.
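A minimal sketch of the automation step, assuming a per-pixel depth map registered to the color frame: thresholding depth with an uncertainty band yields a trimap without user interaction, which any natural matting algorithm can then refine. The cutoff and band values here are illustrative.

```python
import numpy as np

def depth_trimap(depth, fg_max_depth, band=0.15):
    """Build a trimap automatically from a per-pixel depth map: pixels
    clearly nearer than the cutoff are foreground, clearly farther are
    background, and a band around the cutoff is left unknown for the
    matting algorithm to resolve. This replaces the hand-drawn trimap
    that single-frame matting normally requires.
    """
    trimap = np.full(depth.shape, 128, dtype=np.uint8)  # unknown
    trimap[depth < fg_max_depth * (1 - band)] = 255     # definite foreground
    trimap[depth > fg_max_depth * (1 + band)] = 0       # definite background
    return trimap

# Example: a 2 m cutoff with a +/-15% uncertainty band around it.
depth = np.random.uniform(0.5, 5.0, size=(480, 640))
trimap = depth_trimap(depth, fg_max_depth=2.0)
```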


Computer Graphics Forum | 2007

Synthetic Shutter Speed Imaging

Jacob Telleen; Anne Sullivan; Jerry Yee; Oliver Wang; Prabath Gunawardane; Ian Collins; James Davis

Hand held long exposures often result in blurred photographs due to camera shake. Long exposures are desirable both for artistic effect and in low‐light situations. We propose a novel method for digitally reducing imaging artifacts, which does not require additional hardware such as tripods or optical image stabilization lenses. A series of photographs is acquired with short shutter times, stabilized using image alignment, and then composited. Our method is capable of reducing noise and blurring due to camera shake, while simultaneously preserving the desirable effects of motion blur. The resulting images are very similar to those obtained using a tripod and a true extended exposure.
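A minimal Python/OpenCV sketch of the pipeline, assuming grayscale float frames and purely translational shake: phase correlation estimates each frame's global shift against the reference, the shift is undone, and the aligned frames are averaged. The paper's alignment handles more than translation; this keeps only the acquire-align-composite skeleton.

```python
import cv2
import numpy as np

def synthetic_long_exposure(frames):
    """Simulate a long exposure from a burst of short exposures:
    estimate each frame's global translation against the first frame,
    undo it, and average. Averaging aligned frames suppresses sensor
    noise and camera shake while scene motion stays blurred."""
    ref = frames[0].astype(np.float32)
    h, w = ref.shape
    acc = ref.copy()
    for f in frames[1:]:
        f = f.astype(np.float32)
        # Sub-pixel translation of this frame relative to the reference.
        (dx, dy), _ = cv2.phaseCorrelate(ref, f)
        m = np.float32([[1, 0, -dx], [0, 1, -dy]])
        acc += cv2.warpAffine(f, m, (w, h))  # undo the shake, then accumulate
    return acc / len(frames)

# Example: composite ten aligned short exposures.
burst = [np.random.rand(480, 640).astype(np.float32) for _ in range(10)]
long_exposure = synthetic_long_exposure(burst)
```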


Computer Vision and Pattern Recognition | 2009

Material classification using BRDF slices

Oliver Wang; Prabath Gunawardane; Steven Scher; James Davis

Segmenting images into distinct material types is a very useful capability. Most work in image segmentation addresses the case where only a single image is available. Some methods improve on this by collecting HDR or multispectral images. However, it is also possible to use the reflectance properties of the materials to obtain better results. By acquiring many images of an object under different lighting conditions, we obtain more samples of the surface's bidirectional reflectance distribution function (BRDF). We show that this additional information enlarges the class of material types that can be well separated by segmentation, and that properly treating the information as samples of the BRDF further increases accuracy without requiring an explicit estimation of the material BRDF.
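As a rough sketch of treating per-pixel intensity vectors as BRDF slices, the code below clusters normalized per-pixel responses across lighting conditions. K-means is a stand-in for whatever classifier is trained on labeled material samples, and the normalization is one simple way to discount albedo; both choices are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_materials(stack, n_materials):
    """Per-pixel material labels from a stack of images taken under
    different lighting. Each pixel's vector of intensities across
    lighting conditions samples a slice of its BRDF; pixels with
    similar slices are grouped together.

    stack: (L, H, W) images under L lighting directions
    """
    L, H, W = stack.shape
    feats = stack.reshape(L, -1).T           # one L-dim BRDF slice per pixel
    # Normalize each slice so shading shape, not overall albedo, dominates.
    feats /= np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8
    labels = KMeans(n_clusters=n_materials, n_init=10).fit_predict(feats)
    return labels.reshape(H, W)

# Example: 12 lighting conditions, segment into 3 material types.
stack = np.random.rand(12, 100, 100)
material_map = classify_materials(stack, n_materials=3)
```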


Computer Graphics Forum | 2015

Panoramic Video from Unstructured Camera Arrays

Federico Perazzi; Alexander Sorkine-Hornung; Henning Zimmer; Peter Kaufmann; Oliver Wang; Scott Watson; Markus H. Gross

We describe an algorithm for generating panoramic video from unstructured camera arrays. Artifact‐free panorama stitching is impeded by parallax between input views. Common strategies such as multi‐level blending or minimum energy seams produce seamless results on quasi‐static input. However, on video input these approaches introduce noticeable visual artifacts due to lack of global temporal and spatial coherence. In this paper we extend the basic concept of local warping for parallax removal. Firstly, we introduce an error measure with increased sensitivity to stitching artifacts in regions with pronounced structure. Using this measure, our method efficiently finds an optimal ordering of pair‐wise warps for robust stitching with minimal parallax artifacts. Weighted extrapolation of warps in non‐overlap regions ensures temporal stability, while at the same time avoiding visual discontinuities around transitions between views. Remaining global deformation introduced by the warps is spread over the entire panorama domain using constrained relaxation, while staying as close as possible to the original input views. In combination, these contributions form the first system for spatiotemporally stable panoramic video stitching from unstructured camera array input.
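A small numpy sketch of a structure-sensitive error measure of the kind described: pixel differences in the overlap region are weighted by local gradient magnitude, so misalignment of pronounced structure costs more than the same difference in flat regions. Such a score can rank candidate pairwise warps; the paper's exact formulation may differ, and all names here are illustrative.

```python
import numpy as np

def stitch_error(a, b, overlap_mask, eps=1e-8):
    """Structure-sensitive stitching error between two warped views in
    their overlap region: absolute differences weighted by the local
    gradient magnitude of one view, so edges dominate the score."""
    a, b = a.astype(np.float32), b.astype(np.float32)
    gy, gx = np.gradient(a)
    weight = np.sqrt(gx ** 2 + gy ** 2)   # pronounced structure weighs more
    diff = np.abs(a - b)
    m = overlap_mask.astype(bool)
    return float((weight[m] * diff[m]).sum() / (weight[m].sum() + eps))

# Example: score one candidate pairwise warp by its overlap error.
a = np.random.rand(200, 300); b = np.random.rand(200, 300)
mask = np.zeros((200, 300), bool); mask[:, 150:] = True
print(stitch_error(a, b, mask))
```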

Collaboration


Dive into Oliver Wang's collaborations.

Top Co-Authors

James Davis

University of California
