Piotr Didyk
Saarland University
Publications
Featured research published by Piotr Didyk.
international conference on computer graphics and interactive techniques | 2011
Piotr Didyk; Tobias Ritschel; Elmar Eisemann; Karol Myszkowski; Hans-Peter Seidel
Binocular disparity is an important cue for the human visual system to recognize spatial layout, both in reality and in simulated virtual worlds. This paper introduces a perceptual model of disparity for computer graphics that is used to define a metric comparing one stereo image to an alternative stereo image and estimating the magnitude of the perceived disparity change. Our model can be used to assess the effect of disparity manipulations and to control the level of undesirable distortions or deliberately introduced enhancements. A number of psycho-visual experiments are conducted to quantify the joint effect of disparity magnitude and frequency and to derive the model. Besides difference prediction, other applications include compression and retargeting. We also present novel applications in the form of hybrid stereo images and backward-compatible stereo. The latter minimizes disparity so that it conveys a stereo impression when special equipment is used, yet produces images that appear almost ordinary to the naked eye. The validity of our model and difference metric is further confirmed in a user study.
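The band-wise structure of such a disparity difference metric can be sketched as follows. This is a toy illustration only: the box-blur decomposition, the per-band sensitivity weights, and the L2 pooling are illustrative stand-ins for the paper's psychophysically calibrated model.

```python
import numpy as np

def disparity_bands(disparity, n_bands=3):
    """Crude Laplacian-style decomposition of a disparity map into
    spatial-frequency bands via repeated box blurring (illustrative only)."""
    bands = []
    current = disparity.astype(float)
    for _ in range(n_bands - 1):
        # Simple separable box blur as a stand-in for a proper pyramid filter.
        blurred = current.copy()
        for axis in (0, 1):
            blurred = (np.roll(blurred, 1, axis) + blurred
                       + np.roll(blurred, -1, axis)) / 3.0
        bands.append(current - blurred)  # band-pass detail
        current = blurred
    bands.append(current)  # residual low-pass band
    return bands

def perceived_disparity_difference(d_ref, d_test, sensitivity=(1.0, 0.6, 0.3)):
    """Toy difference metric: weight per-band disparity changes by an assumed
    sensitivity per frequency band, then pool with an L2 norm."""
    total = 0.0
    for w, b_ref, b_test in zip(sensitivity,
                                disparity_bands(d_ref),
                                disparity_bands(d_test)):
        total += w * np.mean((b_ref - b_test) ** 2)
    return np.sqrt(total)
```

Identical disparity maps yield a zero difference; compressing disparity (e.g., halving it) yields a positive perceived change, which is the kind of prediction the metric is used for.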
international conference on computer graphics and interactive techniques | 2013
Desai Chen; David I. W. Levin; Piotr Didyk; Pitchaya Sitthi-Amorn; Wojciech Matusik
Multi-material 3D printing allows objects to be composed of complex, heterogeneous arrangements of materials. It is often more natural to define a functional goal than to define the material composition of an object. Translating these functional requirements to fabricable 3D prints is still an open research problem. Recently, several specific instances of this problem have been explored (e.g., appearance or elastic deformation), but they exist as isolated, monolithic algorithms. In this paper, we propose an abstraction mechanism that simplifies the design, development, implementation, and reuse of these algorithms. Our solution relies on two new data structures: a reducer tree that efficiently parameterizes the space of material assignments and a tuner network that describes the optimization process used to compute material arrangement. We provide an application programming interface for specifying the desired object and for defining parameters for the reducer tree and tuner network. We illustrate the utility of our framework by implementing several fabrication algorithms as well as demonstrating the manufactured results.
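A minimal sketch of what a reducer-tree plus tuner abstraction might look like. All class and function names below are hypothetical, not the paper's actual API; the "tuner network" is reduced here to a simple coordinate-descent loop over the tree's leaf parameters.

```python
class ReducerNode:
    """Hypothetical node in a reducer tree: maps a small parameter vector to
    a material assignment over a voxel region (illustrative structure only)."""
    def __init__(self, region, children=None):
        self.region = region          # e.g., a list of voxel indices
        self.children = children or []
        self.params = [0.0]           # mixing weight exposed to the tuner

    def materials(self):
        """Resolve this subtree into per-voxel material weights."""
        if not self.children:
            return {v: self.params[0] for v in self.region}
        out = {}
        for child in self.children:
            out.update(child.materials())
        return out

def tune(root, objective, steps=50, lr=0.1):
    """Toy 'tuner': coordinate descent on leaf parameters to minimise a
    user-supplied objective evaluated on the resolved material field."""
    leaves, stack = [], [root]
    while stack:
        node = stack.pop()
        if node.children:
            stack.extend(node.children)
        else:
            leaves.append(node)
    for _ in range(steps):
        for leaf in leaves:
            base = objective(root.materials())
            leaf.params[0] += lr                     # try increasing
            if objective(root.materials()) > base:   # worse: try decreasing
                leaf.params[0] -= 2 * lr
                if objective(root.materials()) > base:
                    leaf.params[0] += lr             # restore original value
    return root
```

The point of the abstraction is that the same `tune` loop can drive very different fabrication goals simply by swapping the objective function, mirroring how the paper decouples optimization from material parameterization.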
international conference on computer graphics and interactive techniques | 2012
Piotr Didyk; Tobias Ritschel; Elmar Eisemann; Karol Myszkowski; Hans-Peter Seidel; Wojciech Matusik
Binocular disparity is one of the most important depth cues used by the human visual system. Recently developed stereo-perception models allow us to successfully manipulate disparity in order to improve viewing comfort and depth discrimination, as well as stereo content compression and display. Nonetheless, all existing models neglect the substantial influence of luminance on stereo perception. Our work is the first to account for the interplay of luminance contrast (magnitude/frequency) and disparity, and our model predicts the human response to complex stereo-luminance images. Besides improving existing disparity-model applications (e.g., difference metrics or compression), our approach offers new possibilities, such as joint luminance contrast and disparity manipulation or the optimization of auto-stereoscopic content. We validate our results in a user study, which also reveals the advantage of considering luminance contrast and its significant impact on disparity manipulation techniques.
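The core interaction can be illustrated with a toy computation: measure local luminance contrast and use it to scale perceived disparity. The Michelson contrast definition is standard, but the simple multiplicative weighting below is illustrative, not the paper's calibrated model.

```python
import numpy as np

def michelson_contrast(patch):
    """Michelson luminance contrast of an image patch: (max-min)/(max+min),
    the magnitude term in contrast/disparity interaction models."""
    lo, hi = float(patch.min()), float(patch.max())
    return 0.0 if hi + lo == 0 else (hi - lo) / (hi + lo)

def contrast_weighted_disparity(disparity, luminance, floor=0.05):
    """Toy interplay model: scale perceived disparity by local luminance
    contrast, capturing the observation that low-contrast regions support
    weaker depth percepts (the weighting scheme is illustrative)."""
    c = max(michelson_contrast(luminance), floor)
    return c * np.asarray(disparity, dtype=float)
```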
eurographics | 2008
Piotr Didyk; Rafal Mantiuk; Matthias Hein; Hans-Peter Seidel
To utilize the full potential of new high dynamic range (HDR) displays, a system for the enhancement of bright luminous objects in video sequences is proposed. The system classifies clipped (saturated) regions as lights, reflections or diffuse surfaces using a semi-automatic classifier and then enhances each class of objects with respect to its relative brightness. The enhancement algorithm can significantly stretch the contrast of clipped regions while avoiding amplification of noise and contouring. We demonstrate that the enhanced video is strongly preferred to non-enhanced video, and it compares favorably to other methods.
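A heavily simplified sketch of the idea: find saturated pixels and boost them above the clip level. The real system first classifies regions (light / reflection / diffuse) and enhances each class separately; here a single boost factor scaled by a crude interiority measure stands in for that per-class enhancement.

```python
import numpy as np

def enhance_clipped(luminance, clip_level=0.98, boost=4.0):
    """Toy clipped-region enhancement: detect saturated pixels and stretch
    their values above the clip level, boosting the interior of clipped
    regions more than their borders (illustrative only)."""
    out = luminance.astype(float).copy()
    clipped = out >= clip_level
    if clipped.any():
        # Count how much of each pixel's 4-neighbourhood is also clipped;
        # deep-interior pixels of a clipped region get the largest boost.
        interior = sum(np.roll(clipped, s, a) for s in (1, -1) for a in (0, 1))
        out[clipped] = clip_level + boost * (interior[clipped] / 4.0)
    return out
```

Unsaturated pixels pass through unchanged, while clipped regions are pushed into the extended range of the HDR display.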
international conference on computer graphics and interactive techniques | 2010
Piotr Didyk; Elmar Eisemann; Tobias Ritschel; Karol Myszkowski; Hans-Peter Seidel
Limited spatial resolution of current displays makes the depiction of very fine spatial details difficult. This work proposes a novel method applied to moving images that takes into account the human visual system and leads to an improved perception of such details. To this end, we display images rapidly varying over time along a given trajectory on a high refresh rate display. Due to the retinal integration time the information is fused and yields apparent super-resolution pixels on a conventional-resolution display. We discuss how to find optimal temporal pixel variations based on linear eye-movement and image content and extend our solution to arbitrary trajectories. This step involves an efficient method to predict and successfully treat potentially visible flickering. Finally, we evaluate the resolution enhancement in a perceptual study that shows that significant improvements can be achieved both for computer generated images and photographs.
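The optimization behind apparent super-resolution can be illustrated in 1-D: each subframe is a low-resolution image displayed at a small offset while the eye tracks linear motion, and retinal integration averages the subframes. Finding subframe pixel values so that this average matches a high-resolution target is a least-squares problem. The setup below (circular shifts, uniform box pixels) is a toy stand-in for the paper's formulation.

```python
import numpy as np

def integration_matrix(n_hi, n_sub):
    """Each of the n_sub subframes is a low-res image (n_hi // n_sub pixels,
    each covering n_sub high-res positions) shown shifted by one extra
    high-res pixel; temporal integration averages the subframes."""
    n_lo = n_hi // n_sub
    A = np.zeros((n_hi, n_sub * n_lo))
    for k in range(n_sub):            # subframe index (also its shift)
        for j in range(n_lo):         # low-res pixel index
            for d in range(n_sub):    # high-res positions under that pixel
                r = (j * n_sub + d + k) % n_hi
                A[r, k * n_lo + j] = 1.0 / n_sub
    return A

def optimal_subframes(target, n_sub=2):
    """Least-squares fit of subframe pixel values so that the integrated
    (perceived) signal matches the high-res target."""
    A = integration_matrix(len(target), n_sub)
    x, *_ = np.linalg.lstsq(A, np.asarray(target, float), rcond=None)
    return x, A @ x
```

Not every target is exactly reachable (the integration matrix is rank-deficient), which echoes why the paper must additionally predict and suppress visible flickering rather than optimize reconstruction alone.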
vision modeling and visualization | 2010
Piotr Didyk; Tobias Ritschel; Elmar Eisemann; Karol Myszkowski; Hans-Peter Seidel
Stereo vision is becoming increasingly popular in feature films, visualization and interactive applications such as computer games. However, computation costs are doubled when rendering an individual image for each eye. In this work, we propose to only render a single image, together with a depth buffer and use image-based techniques to generate two individual images for the left and right eye. The resulting method computes a high-quality stereo pair for roughly half the cost of the traditional methods. We achieve this result via an adaptive-grid warping that also involves information from previous frames to avoid artifacts.
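The essence of generating two views from one image plus depth can be sketched as a forward warp: each pixel is shifted horizontally by a disparity inversely related to its depth. The paper uses adaptive-grid warping with temporal reuse; the naive per-pixel splat below (with its holes and overwrite conflicts left unhandled) is only meant to show the geometric core.

```python
import numpy as np

def synthesize_stereo_pair(image, depth, max_disparity=4):
    """Toy forward warp: shift pixels left/right by a depth-dependent
    disparity to synthesize a stereo pair from one rendered image + depth."""
    h, w = image.shape
    disparity = (max_disparity / np.maximum(depth, 1e-6)).round().astype(int)
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            xl, xr = x + d, x - d       # symmetric shifts for the two eyes
            if 0 <= xl < w:
                left[y, xl] = image[y, x]
            if 0 <= xr < w:
                right[y, xr] = image[y, x]
    return left, right
```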
international conference on computer graphics and interactive techniques | 2013
Piotr Didyk; Pitchaya Sitthi-Amorn; William T. Freeman; Wojciech Matusik
Multi-view autostereoscopic displays provide an immersive, glasses-free 3D viewing experience, but they require correctly filtered content from multiple viewpoints. This, however, cannot be easily obtained with current stereoscopic production pipelines. We provide a practical solution that takes a stereoscopic video as an input and converts it to multi-view and filtered video streams that can be used to drive multi-view autostereoscopic displays. The method combines phase-based video magnification and interperspective antialiasing into a single filtering process. The whole algorithm is simple and can be efficiently implemented on current GPUs to yield near real-time performance. Furthermore, the ability to retarget disparity is naturally supported. Our method is robust and works well for challenging video scenes with defocus blur, motion blur, transparent materials, and specularities. We show that our results are superior when compared to the state-of-the-art depth-based rendering methods. Finally, we showcase the method in the context of a real-time 3D videoconferencing system that requires only two cameras.
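The key property that phase-based methods exploit is that a spatial shift of a signal equals a linear phase ramp in the Fourier domain, so intermediate viewpoints can be synthesized by scaling phase differences rather than warping pixels explicitly. The sketch below applies one global shift per scanline; the actual method operates per frequency band and locally.

```python
import numpy as np

def phase_shift_view(row, shift):
    """Shift one scanline by modifying phases in the Fourier domain:
    multiplying the spectrum by exp(-2*pi*i*f*shift) yields row(x - shift)."""
    freqs = np.fft.fftfreq(len(row))
    spectrum = np.fft.fft(row)
    shifted = spectrum * np.exp(-2j * np.pi * freqs * shift)
    return np.fft.ifft(shifted).real

def interpolate_views(left_row, disparity, n_views=4):
    """Generate intermediate viewpoints by scaling the (here: single global)
    disparity linearly between the two input views."""
    return [phase_shift_view(left_row, t * disparity)
            for t in np.linspace(0.0, 1.0, n_views)]
```

Because the shift is a simple per-coefficient multiply, disparity retargeting falls out for free: scaling `disparity` before interpolation changes the depth range without any extra machinery, which mirrors the retargeting property noted in the abstract.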
Computer Graphics Forum | 2010
Piotr Didyk; Elmar Eisemann; Tobias Ritschel; Karol Myszkowski; Hans-Peter Seidel
High-refresh-rate displays (e.g., 120 Hz) have recently become available on the consumer market and are quickly gaining popularity. One of their aims is to reduce the perceived blur created by moving objects that are tracked by the human eye. However, an improvement is only achieved if the video stream is produced at the same high refresh rate (i.e., 120 Hz). Some devices, such as LCD TVs, solve this problem by converting low-refresh-rate content (e.g., 50 Hz PAL) into a higher temporal resolution (e.g., 200 Hz) based on two-dimensional optical flow.
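Flow-based upconversion of this kind can be sketched as follows: an intermediate frame is produced by advecting the earlier frame forward along a fraction of the per-pixel flow and blending with the later frame pulled back along the remaining fraction. Integer flow vectors and wrap-around indexing keep this toy warp trivial; real interpolators handle occlusions and sub-pixel motion.

```python
import numpy as np

def interpolate_frame(frame_a, frame_b, flow, t=0.5):
    """Toy optical-flow frame interpolation for refresh-rate upconversion.
    flow[y, x] holds integer (dy, dx) motion from frame_a to frame_b."""
    h, w = frame_a.shape
    out = np.zeros_like(frame_a, dtype=float)
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y, x]
            # Pull from frame_a a fraction t back along the flow,
            # and from frame_b a fraction (1 - t) forward, then blend.
            ya, xa = int(y - t * dy) % h, int(x - t * dx) % w
            yb, xb = int(y + (1 - t) * dy) % h, int(x + (1 - t) * dx) % w
            out[y, x] = (1 - t) * frame_a[ya, xa] + t * frame_b[yb, xb]
    return out
```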
IEEE Transactions on Visualization and Computer Graphics | 2017
David E. Dunn; Cary Tippets; Kent Torell; Petr Kellnhofer; Kaan Akşit; Piotr Didyk; Karol Myszkowski; David Luebke; Henry Fuchs
Accommodative depth cues, a wide field of view, and ever-higher resolutions all present major hardware design challenges for near-eye displays. Optimizing a design to overcome one of these challenges typically leads to a trade-off in the others. We tackle this problem by introducing an all-in-one solution: a new wide-field-of-view, gaze-tracked near-eye display for augmented reality applications. The key component of our solution is a single see-through, varifocal deformable membrane mirror for each eye, reflecting a display. Each membrane is controlled by an airtight cavity and changes the effective focal power to present a virtual image at a target depth plane determined by the gaze tracker. The benefits of using the membranes include a wide field of view (100° diagonal) and fast depth switching (from 20 cm to infinity within 300 ms). Our subjective experiment verifies the prototype and demonstrates its potential benefits for near-eye see-through displays.
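The quoted depth-switching range corresponds to a 5-diopter change in optical power, since power in diopters is the reciprocal of the image distance in meters. A minimal sketch of that arithmetic (function names are illustrative):

```python
def focal_power_for_depth(depth_m):
    """Dioptric power needed to place the virtual image at a given depth:
    power (diopters) = 1 / distance (meters), with infinity mapping to 0 D."""
    return 0.0 if depth_m == float("inf") else 1.0 / depth_m

def membrane_power_change(near_m, far_m):
    """Change in effective focal power the deformable membrane must provide
    when switching the virtual image between two depth planes."""
    return focal_power_for_depth(near_m) - focal_power_for_depth(far_m)
```

Switching from 20 cm to infinity thus requires a 5 D swing within the stated 300 ms.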
international conference on computer graphics and interactive techniques | 2011
Francesco Banterle; Alessandro Artusi; Tunç Ozan Aydin; Piotr Didyk; Elmar Eisemann; Diego Gutierrez; Rafal Mantiuk; Karol Myszkowski
Retargeting refers to the process by which an image or video is adapted from the display device for which it was meant (target display) to another one (retarget display). The retarget display has different features from the target one, such as dynamic range, discretization levels, color gamut, multi-view capability, refresh rate, and spatial resolution. This is a very relevant topic in graphics, given the increasing number of display devices, from large, high-contrast screens to small cell phones with limited dynamic range; many techniques are being published in different venues, and it is hard to keep up. In many cases retargeting is an ill-posed problem, for example when displaying Low Dynamic Range (LDR) or 8-bit content on High Dynamic Range (HDR) displays. Such a problem requires the retargeting algorithm to generate new content which is missing in the input image/frame. In this course, we will present the latest solutions and techniques for retargeting images along various dimensions such as dynamic range, colors, and temporal and spatial resolutions, and for the first time offer a much-needed holistic view of the field. Moreover, we are going to show how to measure and analyze the changes applied to an image or video in terms of quality using both psychophysical experiments (subjective) and computational metrics (objective). The course should be of interest to anyone involved in graphics in a broader sense, given the almost unavoidable need to retarget results to different devices, from developers interested in implementing retargeting techniques to users who just need an overall perspective. For researchers fully engaged in developing multi-dimensional retargeting techniques, this course will serve as a solid background for future algorithms.
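The LDR-to-HDR case mentioned above can be illustrated with the simplest possible expansion operator: linearise the 8-bit content with an assumed display gamma and rescale to the HDR display's peak luminance. The parameters below are illustrative assumptions; what this toy cannot do (recovering detail lost to clipping) is exactly what makes the problem ill-posed.

```python
import numpy as np

def inverse_tone_map(ldr, peak_nits=1000.0, gamma=2.2):
    """Toy LDR-to-HDR expansion: gamma-linearise 8-bit values, then scale
    to the HDR display's peak luminance (in cd/m^2). Clipped highlights
    remain flat, so real retargeting must hallucinate the missing content."""
    linear = (np.asarray(ldr, dtype=float) / 255.0) ** gamma
    return linear * peak_nits
```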