Publication


Featured research published by Karol Myszkowski.


Eurographics | 2003

Adaptive Logarithmic Mapping For Displaying High Contrast Scenes

Frédéric Drago; Karol Myszkowski; Thomas Annen; Norishige Chiba

We propose a fast, high quality tone mapping technique to display high contrast images on devices with a limited dynamic range of luminance values. The method is based on logarithmic compression of luminance values, imitating the human response to light. A bias power function is introduced to adaptively vary logarithmic bases, resulting in good preservation of details and contrast. To improve contrast in dark areas, changes to the gamma correction procedure are proposed. Our adaptive logarithmic mapping technique is capable of producing perceptually tuned images with high dynamic content and works at interactive speed. We demonstrate a successful application of our tone mapping technique with a high dynamic range video player that enables adjusting optimal viewing conditions for any kind of display while taking into account user preferences concerning brightness, contrast compression, and detail reproduction.
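The operator described above can be condensed into a few lines. The sketch below follows the abstract's description of logarithmic compression with a bias power function that varies the logarithm base per pixel; the parameter names, bias default, and display scaling are illustrative choices rather than the paper's exact formulation.

```python
import numpy as np

def adaptive_log_tonemap(lum, bias=0.85, ld_max=100.0):
    """Adaptive logarithmic tone mapping sketch.

    lum    : HDR luminance image (relative or cd/m^2)
    bias   : bias parameter; smaller values brighten dark regions
    ld_max : assumed maximum display luminance used for output scaling
    """
    lw_max = lum.max()
    # Bias power function: smoothly varies the logarithm base with relative
    # luminance, so highlights are compressed more strongly than midtones.
    bias_pow = np.power(lum / lw_max, np.log(bias) / np.log(0.5))
    scale = (0.01 * ld_max) / np.log10(lw_max + 1.0)
    ld = scale * np.log(lum + 1.0) / np.log(2.0 + 8.0 * bias_pow)
    return np.clip(ld, 0.0, 1.0)   # display-referred values in [0, 1]
```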


Human Vision and Electronic Imaging Conference | 2005

Predicting visible differences in high dynamic range images: model and its calibration

Rafal Mantiuk; Scott J. Daly; Karol Myszkowski; Hans-Peter Seidel

New imaging and rendering systems commonly use physically accurate lighting information in the form of high-dynamic range (HDR) images and video. HDR images contain actual colorimetric or physical values, which can span 14 orders of magnitude, rather than the 8-bit renderings found in standard images. The additional precision and quality retained in HDR visual data are necessary to display images on advanced HDR display devices, capable of showing a contrast of 50,000:1, as compared to the contrast of 700:1 for LCD displays. With the development of high-dynamic range visual techniques comes a need for automatic visual quality assessment of the resulting images. In this paper we propose several modifications to the Visual Difference Predictor (VDP). The modifications improve the prediction of perceivable differences in the full visible range of luminance and under adaptation conditions corresponding to real scene observation. The proposed metric takes into account aspects of high contrast vision, such as scattering of light in the eye's optics (OTF), the nonlinear response to light over the full range of luminance, and local adaptation. To calibrate our HDR VDP we perform experiments using an advanced HDR display, capable of displaying a range of luminance close to that found in real scenes.
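The key extension described here is a luminance response that remains perceptually uniform over the full visible range, instead of the fixed display nonlinearity assumed by the original VDP. A minimal sketch of that idea follows, assuming a deliberately simplified threshold-versus-intensity function; the actual metric uses a calibrated contrast sensitivity model.

```python
import numpy as np

def tvi(lum):
    """Simplified detection threshold at adaptation luminance `lum` (cd/m^2):
    square-root (DeVries-Rose) behaviour below ~1 cd/m^2, Weber-like (1%)
    behaviour above. Purely illustrative constants."""
    return np.where(lum < 1.0,
                    0.01 * np.sqrt(np.maximum(lum, 1e-6)),
                    0.01 * lum)

def jnd_response(lum, l_min=1e-4, l_max=1e8, n=4096):
    """Map luminance to an approximately JND-uniform response by summing
    reciprocal thresholds over log-spaced luminance levels."""
    levels = np.logspace(np.log10(l_min), np.log10(l_max), n)
    steps = np.diff(levels, prepend=levels[0])
    response = np.cumsum(steps / tvi(levels))   # ~1 unit per just-noticeable step
    return np.interp(lum, levels, response)
```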


International Conference on Computer Graphics and Interactive Techniques | 2004

Perception-motivated high dynamic range video encoding

Rafal Mantiuk; Grzegorz Krawczyk; Karol Myszkowski; Hans-Peter Seidel

Due to rapid technological progress in high dynamic range (HDR) video capture and display, the efficient storage and transmission of such data is crucial for the completeness of any HDR imaging pipeline. We propose a new approach for inter-frame encoding of HDR video, which is embedded in the well-established MPEG-4 video compression standard. The key component of our technique is a luminance quantization that is optimized for contrast threshold perception in the human visual system. The quantization scheme requires only 10-11 bits to encode 12 orders of magnitude of visible luminance range and does not lead to perceivable contouring artifacts. Besides video encoding, the proposed quantization provides perceptually optimized luminance sampling for a fast implementation of any global tone mapping operator using a lookup table. To improve the quality of synthetic video sequences, we introduce a coding scheme for discrete cosine transform (DCT) blocks with high contrast. We demonstrate the capabilities of HDR video in a player that enables decoding, tone mapping, and applying post-processing effects in real time. The tone mapping algorithm as well as its parameters can be changed interactively while the video is playing. We can simulate post-processing effects such as glare, night vision, and motion blur, which appear very realistic due to the use of HDR data.
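The luminance quantization is the most reusable piece of this pipeline: codes are spaced so that the step between neighbouring values stays below the contrast detection threshold, which also yields a compact lookup table for tone mapping. The sketch below uses a constant 2% relative threshold purely for illustration; the paper derives the spacing from measured contrast thresholds across the luminance range.

```python
import numpy as np

def build_luma_lut(l_min=1e-4, l_max=1e8, threshold=0.02):
    """Build code <-> luminance tables with just-below-threshold spacing.
    A constant relative threshold is an illustrative stand-in for the
    perception-derived quantization described in the paper."""
    lum = [l_min]
    while lum[-1] < l_max:
        lum.append(lum[-1] * (1.0 + threshold))   # next just-noticeable step
    lum = np.array(lum)
    codes = np.arange(len(lum), dtype=np.float64)
    return codes, lum

codes, lum = build_luma_lut()
bits = int(np.ceil(np.log2(len(codes))))
print(f"{len(codes)} codes ({bits} bits) span "
      f"{np.log10(lum[-1] / lum[0]):.0f} orders of magnitude")

def encode_luma(l):
    """Quantize luminance to integer luma codes via the perceptual LUT."""
    return np.round(np.interp(l, lum, codes)).astype(np.int32)

def decode_luma(v):
    return np.interp(v, codes, lum)
```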


International Conference on Computer Graphics and Interactive Techniques | 2006

Backward compatible high dynamic range MPEG video compression

Rafal Mantiuk; Alexander Efremov; Karol Myszkowski; Hans-Peter Seidel

To embrace the imminent transition from traditional low-contrast video (LDR) content to superior high dynamic range (HDR) content, we propose a novel backward compatible HDR video compression (HDR MPEG) method. We introduce a compact reconstruction function that is used to decompose an HDR video stream into a residual stream and a standard LDR stream, which can be played on existing MPEG decoders, such as DVD players. The reconstruction function is finely tuned to the content of each HDR frame to achieve strong decorrelation between the LDR and residual streams, which minimizes the amount of redundant information. The size of the residual stream is further reduced by removing invisible details prior to compression using our HDR-enabled filter, which models luminance adaptation, contrast sensitivity, and visual masking based on the HDR content. Designed especially for DVD movie distribution, our HDR MPEG compression method features low storage requirements for HDR content, resulting in a 30% size increase over an LDR video sequence. The proposed compression method does not impose restrictions on, or modify the appearance of, the LDR or HDR video. This is important for backward compatibility of the LDR stream with current DVD appearance, and it also enables independent fine tuning, tone mapping, and color grading of both streams.
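A toy decomposition along these lines is sketched below: the LDR stream is produced by an arbitrary tone mapping operator, a per-frame lookup table serves as a simple stand-in for the paper's reconstruction function, and the residual stream carries what the reconstruction misses. The function names and the log-domain residual are assumptions for illustration; the actual method quantizes and MPEG-encodes both streams.

```python
import numpy as np

def build_reconstruction_lut(hdr_lum, ldr_luma, n_bins=256):
    """For each 8-bit LDR luma value, store the mean HDR luminance of the
    pixels that mapped to it (a crude per-frame reconstruction function)."""
    lut = np.zeros(n_bins)
    for v in range(n_bins):
        mask = ldr_luma == v
        lut[v] = hdr_lum[mask].mean() if mask.any() else (lut[v - 1] if v else 0.0)
    return lut

def encode_frame(hdr_lum, tonemap):
    """Split one HDR frame into a backward-compatible LDR frame + residual."""
    ldr = np.clip(np.round(tonemap(hdr_lum) * 255.0), 0, 255).astype(np.uint8)
    lut = build_reconstruction_lut(hdr_lum, ldr)
    residual = np.log(hdr_lum + 1e-6) - np.log(lut[ldr] + 1e-6)
    return ldr, lut, residual

def decode_frame(ldr, lut, residual):
    """HDR decoders recombine the streams; legacy decoders use `ldr` alone."""
    return np.exp(np.log(lut[ldr] + 1e-6) + residual)
```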


International Conference on Computer Graphics and Interactive Techniques | 2008

Dynamic range independent image quality assessment

Tunç Ozan Aydin; Rafal Mantiuk; Karol Myszkowski; Hans-Peter Seidel

The diversity of display technologies and the introduction of high dynamic range imagery create the need to compare images of radically different dynamic ranges. Current quality assessment metrics are not suitable for this task, as they assume that both reference and test images have the same dynamic range. Image fidelity measures employed by the majority of current metrics, based on the difference of pixel intensity or contrast values between test and reference images, produce meaningless predictions if this assumption does not hold. We present a novel image quality metric capable of operating on an image pair where both images have arbitrary dynamic ranges. Our metric utilizes a model of the human visual system, and its central idea is a new definition of visible distortion based on the detection and classification of visible changes in the image structure. Our metric is carefully calibrated, and its performance is validated through perceptual experiments. We demonstrate possible applications of our metric to the evaluation of direct and inverse tone mapping operators, as well as to the analysis of image appearance on displays with various characteristics.
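The classification idea can be made concrete with a small sketch: given per-pixel probabilities that local contrast is detectable in the reference and in the test image (which in the metric come from an HVS model with contrast sensitivity and masking), distortions are labelled by how visibility changes between the two. The simple thresholding below is an illustration, not the paper's calibrated procedure.

```python
import numpy as np

def classify_structural_change(p_ref, p_test, thr=0.5):
    """Label per-pixel structural distortions from visibility probabilities.

    p_ref, p_test : probability that local contrast is visible in the
                    reference / test image (supplied by an HVS model)
    Returns boolean maps for two of the reported distortion types;
    contrast reversal would additionally require the contrast sign.
    """
    loss_of_visible_contrast   = (p_ref >= thr) & (p_test < thr)
    amplification_of_invisible = (p_ref < thr) & (p_test >= thr)
    return loss_of_visible_contrast, amplification_of_invisible
```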


Human Vision and Electronic Imaging Conference | 2005

Perceptual evaluation of tone mapping operators with real-world scenes

Akiko Yoshida; Volker Blanz; Karol Myszkowski; Hans-Peter Seidel

A number of successful tone mapping operators for contrast compression have been proposed due to the need to visualize high dynamic range (HDR) images on low dynamic range devices. They were inspired by fields as diverse as image processing, photographic practice, and modeling of the human visual system (HVS). The variety of approaches calls for a systematic perceptual evaluation of their performance. We conduct a psychophysical experiment based on a direct comparison between the appearance of real-world scenes and HDR images of these scenes displayed on a low dynamic range monitor. In our experiment, HDR images are tone mapped by seven existing tone mapping operators. The primary interest of this psychophysical experiment is to assess the differences in how tone mapped images are perceived by human observers and to find out which attributes of image appearance account for these differences when tone mapped images are compared directly with their corresponding real-world scenes rather than with each other. The human subjects rate image naturalness, overall contrast, overall brightness, and detail reproduction in dark and bright image regions with respect to the corresponding real-world scene. The results indicate substantial differences in the perception of images produced by individual tone mapping operators. We observe a clear distinction between global and local operators in favor of the latter, and we classify the tone mapping operators according to naturalness and appearance attributes.


Computer Graphics Forum | 2008

Apparent Greyscale: A Simple and Fast Conversion to Perceptually Accurate Images and Video

Kaleigh Smith; Pierre-Edouard Landes; Joëlle Thollot; Karol Myszkowski

This paper presents a quick and simple method for converting complex images and video to perceptually accurate greyscale versions. We use a two-step approach: first, grey values are assigned globally and colour ordering is determined; second, the greyscale is locally enhanced to reproduce the original contrast. Our global mapping is image-independent and incorporates the Helmholtz-Kohlrausch colour appearance effect to predict differences between isoluminant colours. Our multiscale local contrast enhancement reintroduces lost discontinuities only in regions that insufficiently represent the original chromatic contrast. All operations are restricted so that they preserve the overall image appearance, lightness range and differences, colour ordering, and spatial details, resulting in perceptually accurate achromatic reproductions of the colour original.
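The global step can be sketched in a few lines. Working in CIE LCh, the achromatic value is the lightness L* plus a chroma-dependent boost that accounts for the Helmholtz-Kohlrausch effect, so strongly coloured but isoluminant regions keep an apparent lightness difference. The constants below follow a commonly cited Nayatani-style fit and are illustrative, not necessarily the paper's parameterization; the local multiscale enhancement step is omitted.

```python
import numpy as np

def apparent_grey_global(L, C, h_deg):
    """Global greyscale assignment with a Helmholtz-Kohlrausch lightness boost.

    L     : CIE lightness L* in [0, 100]
    C     : CIE chroma C*ab
    h_deg : CIE hue angle in degrees
    """
    h = np.radians(h_deg)
    # Hue-dependent strength of the chromatic lightness boost (illustrative).
    hue_term = 0.116 * np.abs(np.sin((h - np.radians(90.0)) / 2.0)) + 0.085
    return L + (2.5 - 0.025 * L) * hue_term * C
```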


International Conference on Computer Graphics and Interactive Techniques | 2001

Perception-guided global illumination solution for animation rendering

Karol Myszkowski; Takehiro Tawara; Hiroyuki Akamine; Hans-Peter Seidel

We present a method for efficient global illumination computation in dynamic environments that takes advantage of the temporal coherence of the lighting distribution. The method is embedded in the framework of stochastic photon tracing and density estimation techniques. A locally operating, energy-based error metric is used to prevent photon processing in the temporal domain for scene regions in which the lighting distribution changes rapidly. A perception-based error metric suitable for animation is used to keep the noise inherent in stochastic methods below the sensitivity level of the human observer. As a result, perceptually consistent quality is obtained across all animation frames. Furthermore, the computation cost is reduced compared to traditional approaches that operate solely in the spatial domain.


International Conference on Computer Graphics and Interactive Techniques | 2011

A perceptual model for disparity

Piotr Didyk; Tobias Ritschel; Elmar Eisemann; Karol Myszkowski; Hans-Peter Seidel

Binocular disparity is an important cue for the human visual system to recognize spatial layout, both in reality and in simulated virtual worlds. This paper introduces a perceptual model of disparity for computer graphics that is used to define a metric which compares a stereo image to an alternative stereo image and estimates the magnitude of the perceived disparity change. Our model can be used to assess the effect of disparity and to control the level of undesirable distortions or of enhancements introduced on purpose. A number of psycho-visual experiments are conducted to quantify the mutual effect of disparity magnitude and frequency and to derive the model. Besides difference prediction, other applications include compression and retargeting. We also present novel applications in the form of hybrid stereo images and backward-compatible stereo. The latter minimizes disparity so that a stereo impression is conveyed when special equipment is used, while the images appear almost ordinary to the naked eye. The validity of our model and difference metric is confirmed in a further study.


Systems, Man and Cybernetics | 2004

Visible difference predictor for high dynamic range images

Rafal Mantiuk; Karol Myszkowski; Hans-Peter Seidel

Since new imaging and rendering systems commonly use physically accurate lighting information in the form of high-dynamic range data, there is a need for an automatic visual quality assessment of the resulting images. In this work we extend the visible difference predictor (VDP) developed by Daly to handle HDR data. This lets us predict whether a human observer is able to perceive differences between a pair of HDR images under adaptation conditions corresponding to real scene observation.

Collaboration


Dive into Karol Myszkowski's collaborations.

Top Co-Authors

Tobias Ritschel
University College London

Elmar Eisemann
Delft University of Technology

Petr Kellnhofer
Massachusetts Institute of Technology