Publication


Featured research published by Tunc Ozan Aydin.


International Conference on Computer Graphics and Interactive Techniques | 2012

Practical temporal consistency for image-based graphics applications

Manuel Lang; Oliver Wang; Tunc Ozan Aydin; Aljoscha Smolic; Markus H. Gross

We present an efficient and simple method for introducing temporal consistency to a large class of optimization-driven, image-based computer graphics problems. Our method extends recent work in edge-aware filtering, approximating costly global regularization with a fast iterative joint filtering operation. Using this representation, we can achieve tremendous efficiency gains both in terms of memory requirements and running time. This enables us to process entire shots at once, taking advantage of supporting information that exists across far-away frames, something that is difficult with existing approaches due to the computational burden of video data. Our method is able to filter along motion paths using an iterative approach that simultaneously uses and estimates per-pixel optical flow vectors. We demonstrate its utility by creating temporally consistent results for a number of applications including optical flow, disparity estimation, colorization, scribble propagation, sparse data up-sampling, and visual saliency computation.
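
As a rough illustration of the idea of filtering along motion paths, the sketch below propagates a per-pixel estimate from frame to frame along a given optical flow field and blends it with the current frame's estimate, weighted by color similarity. The inputs `frames`, `estimates`, `flows` and all parameters are illustrative assumptions; the actual method also filters spatially and refines the flow jointly, which is omitted here.

```python
# A minimal sketch (NumPy only) of flow-guided, edge-aware temporal filtering.
import numpy as np

def warp_backward(image, flow):
    """Pull values from the previous frame along the given backward flow (nearest neighbour)."""
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return image[src_y, src_x]

def temporally_filter(frames, estimates, flows, sigma_color=0.1, blend=0.7):
    """frames: list of HxWx3 floats in [0,1]; estimates: per-frame HxW maps;
    flows: per-frame HxWx2 backward flow to the previous frame."""
    out = [estimates[0].copy()]
    for t in range(1, len(frames)):
        prev_est = warp_backward(out[-1], flows[t])        # propagate previous result
        prev_col = warp_backward(frames[t - 1], flows[t])  # colour at the matched pixel
        # Edge-aware weight: trust the propagated value only where colours agree.
        diff = np.linalg.norm(frames[t] - prev_col, axis=-1)
        w = blend * np.exp(-(diff ** 2) / (2.0 * sigma_color ** 2))
        out.append(w * prev_est + (1.0 - w) * estimates[t])
    return out
```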


IEEE Transactions on Visualization and Computer Graphics | 2015

Automated Aesthetic Analysis of Photographic Images

Tunc Ozan Aydin; Aljoscha Smolic; Markus H. Gross

We present a perceptually calibrated system for automatic aesthetic evaluation of photographic images. Our work builds upon the concepts of no-reference image quality assessment, with the main difference being our focus on rating image aesthetic attributes rather than detecting image distortions. In contrast to recent attempts on highly subjective aesthetic judgment problems such as binary aesthetic classification and the prediction of an image's overall aesthetics rating, our method aims at providing a reliable objective basis of comparison between aesthetic properties of different photographs. To that end, our system computes perceptually calibrated ratings for a set of fundamental and meaningful aesthetic attributes that together form an “aesthetic signature” of an image. We show that aesthetic signatures can not only be used to improve upon the current state-of-the-art in automatic aesthetic judgment, but also enable interesting new photo editing applications such as automated aesthetic analysis, HDR tone mapping evaluation, and providing aesthetic feedback during multi-scale contrast manipulation.
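
A minimal sketch of the "aesthetic signature" idea, assuming a hand-picked set of attributes: the stand-in formulas below (Laplacian sharpness, RMS contrast, Hasler-Süsstrunk colorfulness) are common heuristics, not the paper's perceptually calibrated ratings.

```python
# Collect per-attribute scores into an illustrative "aesthetic signature" vector.
import numpy as np

def sharpness(gray):
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
           np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4.0 * gray)
    return float(lap.var())                      # variance of the Laplacian

def rms_contrast(gray):
    return float(gray.std())

def colorfulness(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rg, yb = r - g, 0.5 * (r + g) - b
    return float(np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean()))

def aesthetic_signature(rgb):
    """rgb: HxWx3 float image in [0,1]. Returns a small attribute vector."""
    gray = rgb.mean(axis=-1)
    return np.array([sharpness(gray), rms_contrast(gray), colorfulness(rgb)])
```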


International Conference on Computer Graphics and Interactive Techniques | 2014

Temporally coherent local tone mapping of HDR video

Tunc Ozan Aydin; Nikolce Stefanoski; Simone Croci; Markus H. Gross; Aljoscha Smolic

Recent subjective studies showed that current tone mapping operators either produce disturbing temporal artifacts, or are limited in their local contrast reproduction capability. We address both of these issues and present an HDR video tone mapping operator that can greatly reduce the input dynamic range, while at the same time preserving scene details without causing significant visual artifacts. To achieve this, we revisit the commonly used spatial base-detail layer decomposition and extend it to the temporal domain. We achieve high-quality spatiotemporal edge-aware filtering efficiently by using a mathematically justified iterative approach that approximates a global solution. Comparison with the state-of-the-art, both qualitatively and quantitatively through a controlled subjective experiment, clearly shows our method's advantages over previous work. We present local tone mapping results on challenging high-resolution scenes with complex motion and varying illumination. We also demonstrate our method's capability of preserving scene details at user-adjustable scales, and its advantages for low-light video sequences with significant camera noise.
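
A minimal sketch of base-detail tone mapping with a temporally smoothed base layer: compressing only the base layer is what preserves scene detail. A Gaussian filter and a simple recursive blend stand in for the paper's edge-aware spatiotemporal filtering; all parameters are illustrative.

```python
# Base-detail video tone mapping sketch with a temporally blended base layer.
import numpy as np
from scipy.ndimage import gaussian_filter

def tone_map_sequence(luminances, sigma_spatial=8.0, compression=0.4, temporal=0.8):
    """luminances: list of HxW linear luminance maps (> 0). Returns display-referred frames."""
    prev_base = None
    results = []
    for lum in luminances:
        log_l = np.log(lum + 1e-6)
        base = gaussian_filter(log_l, sigma_spatial)               # spatial base layer
        if prev_base is not None:
            base = temporal * prev_base + (1.0 - temporal) * base  # temporal coherence
        prev_base = base
        detail = log_l - base                                      # detail layer is preserved
        out = np.exp(compression * base + detail)                  # compress only the base
        results.append(out / out.max())                            # normalise for display
    return results
```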


Computer Graphics Forum | 2014

Optimizing stereo-to-multiview conversion for autostereoscopic displays

Alexandre Chapiro; Simon Heinzle; Tunc Ozan Aydin; Steven Poulakos; Matthias Zwicker; Aljoscha Smolic; Markus H. Gross

We present a novel stereo-to-multiview video conversion method for glasses-free multiview displays. Different from previous stereo-to-multiview approaches, our mapping algorithm utilizes the limited depth range of autostereoscopic displays optimally and strives to preserve the scene's artistic composition and perceived depth even under strong depth compression. We first present an investigation of how perceived image quality relates to spatial frequency and disparity. The outcome of this study is utilized in a two-step mapping algorithm, where we (i) compress the scene depth using a non-linear global function to the depth range of an autostereoscopic display and (ii) enhance the depth gradients of salient objects to restore the perceived depth and salient scene structure. Finally, an adapted image domain warping algorithm is proposed to generate the multiview output, which enables overall disparity range extension.
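
A minimal sketch of the two-step mapping under simplifying assumptions: a tanh curve stands in for the perceptually derived global compression function, and an unsharp-mask style boost weighted by a saliency map stands in for the gradient enhancement of salient objects.

```python
# Illustrative two-step disparity remapping for an autostereoscopic display.
import numpy as np
from scipy.ndimage import gaussian_filter

def remap_disparity(disparity, saliency, display_range=6.0, boost=0.5):
    """disparity: HxW map in pixels; saliency: HxW map in [0, 1]."""
    # (i) Non-linear global compression into [-display_range/2, +display_range/2].
    d = disparity - np.median(disparity)
    compressed = 0.5 * display_range * np.tanh(d / (np.abs(d).max() + 1e-6))

    # (ii) Restore local depth structure of salient content (unsharp-mask style boost).
    smooth = gaussian_filter(compressed, sigma=4.0)
    enhanced = compressed + boost * saliency * (compressed - smooth)
    return np.clip(enhanced, -0.5 * display_range, 0.5 * display_range)
```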


Computer Vision and Pattern Recognition | 2017

Designing Effective Inter-Pixel Information Flow for Natural Image Matting

Yagiz Aksoy; Tunc Ozan Aydin; Marc Pollefeys

We present a novel, purely affinity-based natural image matting algorithm. Our method relies on carefully defined pixel-to-pixel connections that enable effective use of information available in the image and the trimap. We control the information flow from the known-opacity regions into the unknown region, as well as within the unknown region itself, by utilizing multiple definitions of pixel affinities. This way we achieve significant improvements on matte quality near challenging regions of the foreground object. Among other forms of information flow, we introduce color-mixture flow, which builds upon local linear embedding and effectively encapsulates the relation between different pixel opacities. Our resulting novel linear system formulation can be solved in closed-form and is robust against several fundamental challenges in natural matting such as holes and remote intricate structures. While our method is primarily designed as a standalone natural matting tool, we show that it can also be used for regularizing mattes obtained by various sampling-based methods. Our evaluation using the public alpha matting benchmark suggests a significant performance improvement over the state-of-the-art.
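
The closed-form solve can be illustrated as a sparse linear system built from pixel affinities, as in the hypothetical sketch below. Only a single 4-neighbour color affinity is used; the paper combines several information-flow affinities (such as color-mixture flow), which this sketch does not reproduce.

```python
# Generic affinity-based alpha solve: graph Laplacian plus trimap constraints.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def solve_alpha(image, trimap, sigma=0.1, lam=100.0):
    """image: HxWx3 float in [0,1]; trimap: HxW with 0=bg, 1=fg, 0.5=unknown."""
    h, w = trimap.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)

    rows, cols, vals = [], [], []
    for di, dj in [(0, 1), (1, 0)]:                      # right and down neighbours
        a = idx[: h - di, : w - dj].ravel()
        b = idx[di:, dj:].ravel()
        diff = image[: h - di, : w - dj] - image[di:, dj:]
        aff = np.exp(-(diff ** 2).sum(-1).ravel() / (2.0 * sigma ** 2))
        rows += [a, b]
        cols += [b, a]
        vals += [aff, aff]
    W = sp.coo_matrix((np.concatenate(vals),
                       (np.concatenate(rows), np.concatenate(cols))),
                      shape=(n, n)).tocsr()
    L = sp.diags(np.asarray(W.sum(1)).ravel()) - W       # graph Laplacian

    known = (trimap != 0.5).ravel().astype(float)        # trimap constraints
    b_vec = lam * trimap.ravel() * known
    A = L + lam * sp.diags(known)
    alpha = spsolve(A.tocsc(), b_vec)
    return np.clip(alpha, 0.0, 1.0).reshape(h, w)
```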


ACM Transactions on Graphics | 2017

Unmixing-Based Soft Color Segmentation for Image Manipulation

Yagiz Aksoy; Tunc Ozan Aydin; Aljoscha Smolic; Marc Pollefeys

We present a new method for decomposing an image into a set of soft color segments that are analogous to color layers with alpha channels that have been commonly utilized in modern image manipulation software. We show that the resulting decomposition serves as an effective intermediate image representation, which can be utilized for performing various, seemingly unrelated, image manipulation tasks. We identify a set of requirements that soft color segmentation methods have to fulfill, and present an in-depth theoretical analysis of prior work. We propose an energy formulation for producing compact layers of homogeneous colors and a color refinement procedure, as well as a method for automatically estimating a statistical color model from an image. This results in a novel framework for automatic and high-quality soft color segmentation that is efficient, parallelizable, and scalable. We show that our technique is superior in quality compared to previous methods through quantitative analysis as well as visually through an extensive set of examples. We demonstrate that our soft color segments can easily be exported to familiar image manipulation software packages and used to produce compelling results for numerous image manipulation applications without forcing the user to learn new tools and workflows.
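
A minimal sketch of per-pixel color unmixing against a small set of layer colors: K-means centers stand in for the automatically estimated statistical color model, and an augmented non-negative least-squares solve stands in for the paper's energy minimization and color refinement.

```python
# Decompose an image into soft colour layers by unmixing each pixel.
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.optimize import nnls

def soft_color_segments(rgb, n_layers=4, sum_weight=10.0):
    """rgb: HxWx3 float in [0,1]. Returns (layer_colors, alphas) with alphas HxWxK."""
    pixels = rgb.reshape(-1, 3)
    colors, _ = kmeans2(pixels, n_layers, minit='++')

    # Augmented system: match the pixel colour and (softly) force alphas to sum to 1.
    A = np.vstack([colors.T, sum_weight * np.ones((1, n_layers))])
    alphas = np.empty((pixels.shape[0], n_layers))
    for i, p in enumerate(pixels):
        b = np.concatenate([p, [sum_weight]])
        alphas[i], _ = nnls(A, b)
    return colors, alphas.reshape(rgb.shape[0], rgb.shape[1], n_layers)
```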


ACM Transactions on Graphics | 2016

Interactive High-Quality Green-Screen Keying via Color Unmixing

Yagiz Aksoy; Tunc Ozan Aydin; Marc Pollefeys; Aljosa Smolic

Due to the widespread use of compositing in contemporary feature films, green-screen keying has become an essential part of postproduction workflows. To comply with the ever-increasing quality requirements of the industry, specialized compositing artists spend countless hours using multiple commercial software tools, while eventually having to resort to manual painting because of the many shortcomings of these tools. Due to the sheer amount of manual labor involved in the process, new green-screen keying approaches that produce better keying results with less user interaction are welcome additions to the compositing artist’s arsenal. We found that—contrary to the common belief in the research community—production-quality green-screen keying is still an unresolved problem with its unique challenges. In this article, we propose a novel green-screen keying method utilizing a new energy minimization-based color unmixing algorithm. We present comprehensive comparisons with commercial software packages and relevant methods in literature, which show that the quality of our results is superior to any other currently available green-screen keying solution. It is important to note that, using the proposed method, these high-quality results can be generated using only one-tenth of the manual editing time that a professional compositing artist requires to process the same content having all previous state-of-the-art tools at one’s disposal.
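
As a toy illustration of keying by color unmixing (not the paper's energy formulation), the sketch below explains each pixel as a mix of one global screen color and one global foreground color and takes the mixing weight as alpha; the per-pixel color models that the actual method estimates are omitted.

```python
# Two-colour unmixing: alpha is the least-squares mixing weight along the fg-bg line.
import numpy as np

def unmix_key(rgb, screen_color, foreground_color):
    """rgb: HxWx3 float in [0,1]; colours: length-3 arrays. Returns HxW alpha."""
    fg = np.asarray(foreground_color, float)
    bg = np.asarray(screen_color, float)
    axis = fg - bg
    alpha = ((rgb - bg) @ axis) / (axis @ axis + 1e-12)
    return np.clip(alpha, 0.0, 1.0)
```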


Quality of Multimedia Experience | 2015

A computational model for perception of stereoscopic window violations

Steven Poulakos; Rafael Monroy; Tunc Ozan Aydin; Oliver Wang; Aljoscha Smolic; Markus H. Gross

Creating a computational model for stereoscopic 3D perception is a highly complex undertaking. As one step towards this goal, this paper investigates stereoscopic window violation artifacts, which often interfere with artistic freedom and constrain the comfortable depth volume. Window violations need to be compensated for in most 3D feature movies. Currently this is done in an ad-hoc manner due to a limited understanding of the problem. In this work, we present a model that predicts which window violations are visually disturbing. The model parameters were defined through psychophysical experiments on simple stimuli. The model was then calibrated and validated on real, complex stereoscopic images. Finally, we present a system that visualizes problematic stereoscopic window violations and provides guidance on how to correct them.
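
A hypothetical geometric check, not the paper's calibrated model: flag content that touches the left or right frame edge while lying in front of the screen plane (crossed, i.e. negative, disparity) and score it by affected area and disparity magnitude.

```python
# Simple heuristic score for stereoscopic window violations at the frame borders.
import numpy as np

def window_violation_score(disparity, border=16):
    """disparity: HxW map in pixels, negative = in front of the screen."""
    left, right = disparity[:, :border], disparity[:, -border:]
    scores = []
    for region in (left, right):
        in_front = region < 0.0
        area = in_front.mean()                               # fraction of border affected
        magnitude = -region[in_front].mean() if in_front.any() else 0.0
        scores.append(area * magnitude)
    return {'left': scores[0], 'right': scores[1]}
```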


International Conference on Image Processing | 2015

Chromatic Calibration of an HDR Display Using 3D Octree Forests

Aljosa Smolic; Nikolce Stefanoski; Tunc Ozan Aydin; Jing Liu; Anselm Grundhöfer

High dynamic range (HDR) display prototypes have been built and used for scientific studies for nearly a decade, and they are now on the verge of entering the consumer market. However, problems exist regarding the accurate color reproduction capabilities of these displays. In this paper, we first characterize the color reproduction capability of a state-of-the-art HDR display through a set of measurements, and present a novel calibration method that takes into account the variation of the chrominance error over an HDR display's wide luminance range. Our proposed 3D octree forest data structure for representing and querying the calibration function successfully addresses the challenges in calibrating HDR displays: (i) high computational complexity due to nonlinear chromatic distortions, and (ii) huge storage space demand for a look-up table (35 GB vs. 100 kB). We show that our method achieves high color reproduction accuracy through both objective metrics and a controlled subjective study.
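
A generic adaptive-octree lookup table illustrates the space-partitioning idea: cells store a correction vector and subdivide only where a constant approximation of the calibration function is too coarse, which keeps storage far below a dense look-up table. The single-tree construction and query below are a simplified sketch; the paper's octree forest and its measured calibration data are more involved.

```python
# Adaptive octree LUT over an RGB cube; calib_fn is a user-supplied calibration
# function (e.g. an interpolant of display measurements) returning a correction vector.
import numpy as np

class OctreeLUT:
    def __init__(self, calib_fn, lo, hi, tol=0.01, depth=0, max_depth=6):
        self.lo, self.hi = np.asarray(lo, float), np.asarray(hi, float)
        center = 0.5 * (self.lo + self.hi)
        self.value = calib_fn(center)                      # correction stored at the cell
        corners = [calib_fn(self.lo + (self.hi - self.lo) * np.array(c))
                   for c in np.ndindex(2, 2, 2)]
        err = max(np.abs(c - self.value).max() for c in corners)
        self.children = None
        if err > tol and depth < max_depth:                # subdivide where error is high
            self.children = []
            for c in np.ndindex(2, 2, 2):
                off = np.array(c) * 0.5 * (self.hi - self.lo)
                self.children.append(OctreeLUT(calib_fn, self.lo + off, center + off,
                                               tol, depth + 1, max_depth))

    def query(self, rgb):
        if self.children is None:
            return self.value
        mid = 0.5 * (self.lo + self.hi)
        child = (rgb[0] >= mid[0]) * 4 + (rgb[1] >= mid[1]) * 2 + (rgb[2] >= mid[2])
        return self.children[int(child)].query(rgb)
```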


International Conference on Computational Photography | 2015

Contrast-Use Metrics for Tone Mapping Images

Miguel Granados; Tunc Ozan Aydin; J. Rafael Tena; Jean-François Lalonde; Christian Theobalt

Existing tone mapping operators (TMOs) provide good results in well-lit scenes, but often perform poorly on images captured in low-light conditions. In these scenes, noise is prevalent and gets amplified by TMOs, as they confuse contrast created by noise with contrast created by the scene. This paper presents a principled approach to produce tone-mapped images with less visible noise. For this purpose, we leverage established models of camera noise and human contrast perception to design two new quality scores: contrast waste and contrast loss, which measure image quality as a function of contrast allocation. To produce tone mappings with less visible noise, we apply these scores in two ways: first, to automatically tune the parameters of existing TMOs to reduce the amount of noise they produce; and second, to propose a new noise-aware tone curve.
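
A minimal sketch of scoring a tone curve by contrast allocation, under simplifying assumptions: a Weber-contrast approximation, a 1% visibility threshold, and a user-supplied noise model stand in for the paper's calibrated perceptual and camera models.

```python
# Illustrative "contrast waste" (display contrast spent on noise) and
# "contrast loss" (visible scene contrast discarded) scores for a tone curve.
import numpy as np

def contrast_use_scores(luminance, tone_curve, noise_sigma, threshold=0.01):
    """luminance: HxW scene luminance; tone_curve: vectorised callable on luminance;
    noise_sigma: callable giving the noise std at a given luminance."""
    eps = 1e-6
    mapped = tone_curve(luminance)
    # Slope of the tone curve via finite differences in luminance.
    slope = (tone_curve(luminance * 1.01) - mapped) / (0.01 * luminance + eps)

    # Weber contrast that pure noise produces on the display.
    noise_contrast = slope * noise_sigma(luminance) / (mapped + eps)
    waste = np.maximum(noise_contrast - threshold, 0.0).mean()

    # Scene contrast (horizontal Weber contrast) before and after tone mapping.
    scene_before = np.abs(np.diff(luminance, axis=1)) / (luminance[:, 1:] + eps)
    scene_after = np.abs(np.diff(mapped, axis=1)) / (mapped[:, 1:] + eps)
    visible_before = scene_before > threshold
    loss = (np.maximum(threshold - scene_after[visible_before], 0.0).mean()
            if visible_before.any() else 0.0)
    return {'contrast_waste': float(waste), 'contrast_loss': float(loss)}
```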
