Ahmet Oğuz Akyüz
Middle East Technical University
Publications
Featured research published by Ahmet Oğuz Akyüz.
Computer Graphics Forum | 2015
Okan Tarhan Tursun; Ahmet Oğuz Akyüz; Aykut Erdem; Erkut Erdem
Obtaining a high-quality high dynamic range (HDR) image in the presence of camera and object movement has been a long-standing challenge. Many methods, known as HDR deghosting algorithms, have been developed over the past ten years to address this challenge. Each of these algorithms approaches the deghosting problem from a different perspective, providing solutions that range in complexity from rudimentary heuristics to advanced computer vision techniques. The proposed solutions generally differ in two ways: (1) how ghost regions are detected and (2) how ghosts are eliminated. Some algorithms completely discard moving objects, giving rise to HDR images that contain only the static regions. Others try to find the best image to use for each dynamic region. Yet others try to register moving objects across images, in the spirit of maximizing dynamic range in dynamic regions. Furthermore, each algorithm may introduce different types of artifacts as it aims to eliminate ghosts; these artifacts may come in the form of noise, broken objects, under- and over-exposed regions, and residual ghosting. Given the high volume of studies conducted in this field in recent years, a comprehensive survey of the state of the art is required, and the first goal of this paper is to provide this survey. Secondly, the large number of algorithms brings about the need to classify them; thus, the second goal of this paper is to propose a taxonomy of deghosting algorithms which can be used to group existing and future algorithms into meaningful classes. Thirdly, the large number of algorithms also brings about the need to evaluate their effectiveness, as each new algorithm claims to outperform its predecessors. Therefore, the last goal of this paper is to share the results of a subjective experiment which aims to evaluate various state-of-the-art deghosting algorithms.
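As a concrete illustration of the first of these two steps, ghost detection, the sketch below shows the kind of rudimentary heuristic the survey alludes to: after normalizing each exposure by its exposure time, pixels whose radiance estimates disagree strongly across the stack are flagged as ghost candidates. The function and its threshold are illustrative placeholders, not an algorithm from the paper.

```python
import numpy as np

def detect_ghosts(exposures, times, threshold=0.05):
    """Flag pixels whose radiance estimates disagree across the stack.

    exposures: list of linearized (H, W) images with values in [0, 1]
    times: exposure time of each image
    Returns a boolean ghost map of shape (H, W).
    """
    # Normalize each exposure into a common radiance estimate.
    radiances = np.stack([img / t for img, t in zip(exposures, times)])
    # High variance across exposures signals motion (or noise).
    variance = radiances.var(axis=0)
    mean = radiances.mean(axis=0)
    # Normalize by mean radiance so bright regions are not over-flagged.
    return variance / (mean**2 + 1e-6) > threshold
```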
Journal of Real-time Image Processing | 2015
Ahmet Oğuz Akyüz
Use of high dynamic range (HDR) images and video in image processing and computer graphics applications is rapidly gaining popularity. However, creating and displaying high-resolution HDR content on CPUs is a time-consuming task. Although some previous work focused on real-time tone mapping, the implementation of a full HDR imaging (HDRI) pipeline on the GPU has not been detailed. In this article, we aim to fill this gap by providing a detailed description of how the HDRI pipeline, from HDR image assembly to tone mapping, can be implemented exclusively on the GPU. We also explain the trade-offs that need to be made to improve efficiency and show timing comparisons for CPU versus GPU implementations of the HDRI pipeline.
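For reference, here is a minimal CPU-side NumPy sketch of the two endpoints of the pipeline the article describes, weighted HDR assembly and a simple global tone map. The hat-shaped weighting and the Reinhard-style curve are common textbook choices, not necessarily the ones used in the article's GPU implementation.

```python
import numpy as np

def assemble_hdr(exposures, times):
    """Merge linearized LDR exposures into one radiance map."""
    num = np.zeros_like(exposures[0])
    den = np.zeros_like(exposures[0])
    for img, t in zip(exposures, times):
        # Hat weight: trust mid-tones, distrust near-black/near-white pixels.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-6)

def tone_map(radiance, key=0.18):
    """Simple global operator: scale by the log-average, then compress."""
    log_avg = np.exp(np.mean(np.log(radiance + 1e-6)))
    scaled = key * radiance / log_avg
    return scaled / (1.0 + scaled)  # maps [0, inf) into [0, 1)
```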
Computer Graphics Forum | 2016
Okan Tarhan Tursun; Ahmet Oğuz Akyüz; Aykut Erdem; Erkut Erdem
Reconstructing high dynamic range (HDR) images of a complex scene involving moving objects and dynamic backgrounds is prone to artifacts. A large number of methods, known as HDR deghosting algorithms, have been proposed to alleviate these artifacts. Currently, the quality of these algorithms is judged by subjective evaluations, which are tedious to conduct and quickly become outdated as new algorithms are proposed at a rapid pace. In this paper, we propose an objective metric which aims to simplify this process. Our metric takes a stack of input exposures and the deghosting result and produces a set of artifact maps for different types of artifacts. These artifact maps can be combined to yield a single quality score. We performed a subjective experiment involving 52 subjects and 16 different scenes to validate the agreement of our quality scores with subjective judgements and observed a concordance of almost 80%. Our metric also enables a novel application that we call hybrid deghosting, in which the outputs of different deghosting algorithms are combined to obtain a superior deghosting result.
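The final pooling stage the abstract mentions, combining per-artifact maps into a single quality score, can be pictured with the following hypothetical sketch; the uniform weighting is a placeholder, not the paper's actual pooling scheme.

```python
import numpy as np

def quality_score(artifact_maps, weights=None):
    """Pool per-pixel artifact maps into a single quality score.

    artifact_maps: dict mapping artifact type -> (H, W) map in [0, 1],
                   where higher values mean a stronger artifact.
    Returns a score in [0, 1], higher meaning fewer artifacts.
    """
    weights = weights or {name: 1.0 for name in artifact_maps}
    total = sum(weights.values())
    # Average artifact strength, weighted per artifact type.
    penalty = sum(w * artifact_maps[n].mean() for n, w in weights.items())
    return 1.0 - penalty / total
```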
Computers & Graphics | 2013
Ahmet Oğuz Akyüz; M. Levent Eksert; M. Selin Aydin
Rendering high contrast scenes on display devices with limited dynamic range is a challenging task. Two groups of algorithms have emerged to take up this challenge: tone mapping operators (TMOs) and, more recently, exposure fusion (EF) techniques. While several formal evaluation studies comparing TMOs exist, no formal evaluation has yet been performed that compares EF techniques with each other or against TMOs. Moreover, with the advancements in hand-held devices and programmable digital cameras, it has become possible to directly capture and view high dynamic range (HDR) content on these devices, which are characterized by their small screens. However, very little is currently known about how to best visualize a high contrast scene on a small screen. Thus, the primary goal of this paper is to provide answers to both of these questions by conducting a series of rigorous psychophysical experiments. Our results suggest that the best tone mapping algorithms are generally superior to EF algorithms except for the reproduction of colors. Furthermore, contrary to some previous work, we find that the differences between algorithms are barely perceptible on small screens, and therefore one can opt for a simpler solution over a more complex and accurate one.
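Exposure fusion, one of the two algorithm families compared here, blends the LDR inputs directly without reconstructing scene radiance. The sketch below shows only the well-exposedness weighting idea; a full EF implementation (e.g., Mertens et al.) adds contrast and saturation weights and blends within a multiresolution pyramid to avoid seams.

```python
import numpy as np

def exposure_fusion(images, sigma=0.2):
    """Blend LDR exposures directly, favoring well-exposed pixels.

    images: list of grayscale (H, W) images with values in [0, 1].
    """
    stack = np.stack(images)
    # Gaussian well-exposedness weight centered at mid-gray.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma**2))
    weights /= weights.sum(axis=0) + 1e-6
    return (weights * stack).sum(axis=0)
```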
International Conference on Computer Graphics and Interactive Techniques | 2013
Ahmet Oğuz Akyüz; Kerem Hadimli; Merve Aydınlılar; Christian Bloch
In this paper, we propose a different approach to high dynamic range (HDR) image tone mapping. We set aside the assumption that there is a single optimal solution to tone mapping. We argue that tone mapping is inherently a personal process guided by the taste and preferences of the artist; different artists can produce different depictions of the same scene. However, most existing tone mapping operators (TMOs) compel artists to produce similar renderings. Operators that give more freedom to artists require the adjustment of many parameters, which turns tone mapping into a laborious process. In contrast, we propose an algorithm which learns the taste and preferences of an artist from a small set of calibration images. Any new image is then tone mapped to convey the appearance that the artist would desire.
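The calibration idea can be pictured as a small regression problem from image statistics to operator settings. The sketch below is purely illustrative, with made-up features and a single tone mapping parameter; it is not the model proposed in the paper.

```python
import numpy as np

def fit_preference(features, chosen_params):
    """Fit a linear map from image features to a TMO parameter.

    features: (N, D) array, one feature vector per calibration image
              (e.g., log-luminance statistics -- an illustrative choice).
    chosen_params: (N,) TMO parameter values the artist picked.
    """
    X = np.hstack([features, np.ones((len(features), 1))])  # add bias term
    coeffs, *_ = np.linalg.lstsq(X, chosen_params, rcond=None)
    return coeffs

def predict_param(coeffs, feature_vec):
    """Predict the artist's preferred parameter for a new image."""
    return np.append(feature_vec, 1.0) @ coeffs
```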
Computers & Graphics | 2013
Ahmet Oğuz Akyüz; Asli Genctav
The radiometric response of a camera governs the relationship between the incident light on the camera sensor and the output pixel values that are produced. This relationship, which is typically unknown and nonlinear, needs to be estimated for applications that require accurate measurement of scene radiance. Various camera response recovery algorithms have been proposed, each with different merits and drawbacks, but an evaluation study that compares these algorithms has not been presented. In this work, we aim to fill this gap by conducting a rigorous experiment that evaluates the selected algorithms with respect to three metrics: consistency, accuracy, and robustness. In particular, we seek answers to the following four questions: (1) Which camera response recovery algorithm gives the most accurate results? (2) Which algorithm produces the camera response most consistently for different scenes? (3) Which algorithm performs better under varying degrees of noise? (4) Does the sRGB assumption hold in practice? Our findings indicate that Grossberg and Nayar's (GN) algorithm (2004) [1] is the most accurate; Mitsunaga and Nayar's (MN) algorithm (1999) [2] is the most consistent; and Debevec and Malik's (DM) algorithm (1997) [3], together with MN, is the most resistant to noise. We also find that the studied algorithms are not statistically better than each other in terms of accuracy, although all of them statistically outperform the sRGB assumption. By answering these questions, we aim to help researchers and practitioners in the high dynamic range (HDR) imaging community make better choices when selecting an algorithm for camera response recovery.
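The fourth question refers to linearizing pixel values with the standard sRGB transfer function instead of a recovered response. The decoding function below is the standard IEC 61966-2-1 formula against which the recovered curves can be compared.

```python
import numpy as np

def srgb_to_linear(v):
    """Standard sRGB electro-optical transfer function (IEC 61966-2-1).

    v: encoded values in [0, 1]; returns linear-light values in [0, 1].
    Using this in place of a recovered camera response is the
    'sRGB assumption' the paper tests.
    """
    v = np.asarray(v, dtype=np.float64)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)
```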
International Conference on Computer Graphics and Interactive Techniques | 2011
Asli Genctav; Ahmet Oğuz Akyüz
The camera response function determines the relationship between the incident light on the camera sensor and the output pixel values that are produced. For most consumer cameras, this function is proprietary and needs to be estimated to create HDR images that accurately represent the light distribution of the captured scene. Several methods have been proposed in the literature to estimate this unknown mapping using multiple-exposure techniques. In this study, we compare three of the most commonly used methods, namely the response curve estimation algorithms of Debevec and Malik [1997], Mitsunaga and Nayar [1999], and Robertson et al. [2003], in terms of how precisely they estimate an unknown camera response.
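Precision can be quantified by comparing an estimated response curve against a known reference. A minimal sketch, assuming both curves are sampled at the same pixel values and recovered only up to scale:

```python
import numpy as np

def response_rmse(estimated, reference):
    """RMSE between an estimated and a reference response curve.

    Both are 1-D arrays sampled at the same pixel values. Response
    curves are recovered only up to scale, so normalize both to a
    common range before comparing.
    """
    est = estimated / estimated.max()
    ref = reference / reference.max()
    return float(np.sqrt(np.mean((est - ref) ** 2)))
```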
IEEE Computer Graphics and Applications | 2016
Elena Sikudova; Tania Pouli; Alessandro Artusi; Ahmet Oğuz Akyüz; Francesco Banterle; Zeynep Miray Mazlumoglu; Erik Reinhard
An integrated gamut- and tone-management framework for color-accurate reproduction of high dynamic range images can prevent hue and luminance shifts while taking gamut boundaries into consideration. The proposed approach is conceptually and computationally simple, parameter-free, and compatible with existing tone-mapping operators.
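One common way to avoid hue shifts of the kind this framework targets is to tone map luminance only and rescale the color channels by the luminance ratio. The sketch below illustrates that general idea; it is not the paper's specific framework.

```python
import numpy as np

def hue_preserving_tonemap(rgb, tonemap_fn):
    """Apply a luminance-only tone map, preserving pixel hue.

    rgb: (H, W, 3) linear HDR image.
    tonemap_fn: any scalar tone curve, e.g. lambda L: L / (1 + L).
    Scaling all channels by the same ratio keeps chromaticity fixed.
    """
    # Rec. 709 weights for luminance.
    L = rgb @ np.array([0.2126, 0.7152, 0.0722])
    ratio = tonemap_fn(L) / np.maximum(L, 1e-6)
    return rgb * ratio[..., None]
```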
Electronic Imaging | 2015
Serdar Çiftçi; Pavel Korshunov; Ahmet Oğuz Akyüz; Touradj Ebrahimi
Many tools have been proposed for protecting visual privacy, but those available today lack some or all of the important properties expected from such tools. Therefore, in this paper, we propose a simple yet effective method for privacy protection based on false color visualization, which maps the color palette of an image into a different color palette, possibly after a compressive point transformation of the original pixel data, distorting the details of the original image. This method does not require prior detection of faces or other sensitive regions and, hence, unlike typical privacy protection methods, it is less sensitive to inaccurate computer vision algorithms. It is also secure, as the look-up tables can be encrypted; reversible, as table look-ups can be inverted; flexible, as it is independent of format or encoding; adjustable, as the final result can be computed by interpolating the false color image with the original using different degrees of interpolation; less distracting, as it does not create visually unpleasant artifacts; and selective, as it better preserves the semantic structure of the input. Four different color scales and four different compression functions, on which the proposed method relies, are evaluated via objective (three face recognition algorithms) and subjective (50 human subjects in an online study) assessments using faces from the public FERET dataset. The evaluations demonstrate that the DEF and RBS color scales lead to the strongest privacy protection, while the compression functions add little to the strength of privacy protection. Statistical analysis also shows that recognition algorithms and human subjects perceive the proposed protection similarly.
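At its core, the method applies a per-pixel palette look-up, optionally blended with the original for adjustable protection strength. A minimal sketch with a placeholder palette:

```python
import numpy as np

def false_color_protect(gray, palette, alpha=1.0):
    """Map pixel intensities through a color palette, then blend.

    gray: (H, W) uint8 image with values in [0, 255].
    palette: (256, 3) look-up table; encrypting/permuting it makes the
             mapping secure, and inverting it makes the result reversible.
    alpha: 1.0 = fully protected; lower values interpolate toward the
           original, giving adjustable protection strength.
    """
    protected = palette[gray]  # per-pixel table look-up
    original = np.repeat(gray[..., None], 3, axis=2)
    return (alpha * protected + (1 - alpha) * original).astype(np.uint8)
```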
Signal Processing and Communications Applications Conference | 2014
Okan Tarhan Tursun; Ahmet Oğuz Akyüz; Aykut Erdem; Erkut Erdem
The real world encompasses a high range of luminances. In order to capture and represent this range correctly, High Dynamic Range (HDR) imaging techniques have been introduced. Some of these techniques construct an HDR image from several Low Dynamic Range (LDR) images taken at different exposures. In the capture and reconstruction phases, HDR reproduction techniques must resolve the differences between the input LDR images caused by camera and object movement. In this study, two recent approaches addressing this issue are compared using a novel dataset comprising image sequences of varying complexity. The results are evaluated using both objective and subjective measures.