Ql Luat Do
Eindhoven University of Technology
Publications
Featured research published by Ql Luat Do.
3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2009
Ql Luat Do; S Sveta Zinger; Y Yannick Morvan
This paper evaluates our 3D view interpolation rendering algorithm and proposes a few performance-improving techniques. We aim at developing a rendering method for free-viewpoint 3DTV based on depth image warping from surrounding cameras. The key feature of our approach is warping texture and depth simultaneously in the first stage and postponing blending of the new view to a later stage, thereby avoiding errors in the virtual depth map. We evaluate the rendering quality in two ways. First, it is measured while varying the distance between the two nearest cameras. We obtain a PSNR gain of 3 dB and 4.5 dB for the ‘Breakdancers’ and ‘Ballet’ sequences, respectively, compared to a recent algorithm. A second series of tests measuring the rendering quality was performed using compressed video or images from the surrounding cameras. The overall quality of the system is dominated by rendering quality and not by coding.
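As a rough illustration of the depth-image warping step described in this abstract (not the authors' implementation), the sketch below back-projects each reference pixel using its depth, re-projects it into the virtual camera, and keeps the warped depth alongside the texture so that blending can be postponed to a later stage. Camera matrices and variable names are assumptions made for illustration.

```python
import numpy as np

def warp_view(texture, depth, K_ref, Rt_ref, K_virt, Rt_virt):
    """Warp a reference texture + depth pair into a virtual camera.

    texture: (H, W, 3) colour image of the reference camera
    depth:   (H, W) per-pixel depth in the reference camera frame
    K_*:     (3, 3) intrinsic matrices
    Rt_*:    (3, 4) extrinsic [R | t] matrices (world -> camera)
    """
    H, W = depth.shape
    warped_tex = np.zeros_like(texture)
    warped_depth = np.full((H, W), np.inf)

    # Homogeneous pixel grid of the reference image.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # (3, H*W)

    # Back-project to 3D in the reference camera frame, then to world coordinates.
    rays = np.linalg.inv(K_ref) @ pix
    pts_cam = rays * depth.reshape(1, -1)
    R_ref, t_ref = Rt_ref[:, :3], Rt_ref[:, 3:]
    pts_world = R_ref.T @ (pts_cam - t_ref)

    # Project into the virtual camera.
    R_v, t_v = Rt_virt[:, :3], Rt_virt[:, 3:]
    proj = K_virt @ (R_v @ pts_world + t_v)
    z = proj[2]
    x = np.round(proj[0] / z).astype(int)
    y = np.round(proj[1] / z).astype(int)

    # Z-buffered splat: keep the closest surface per target pixel (slow but clear).
    valid = (z > 0) & (x >= 0) & (x < W) & (y >= 0) & (y < H)
    flat_tex = texture.reshape(-1, 3)
    for i in np.flatnonzero(valid):
        if z[i] < warped_depth[y[i], x[i]]:
            warped_depth[y[i], x[i]] = z[i]
            warped_tex[y[i], x[i]] = flat_tex[i]
    return warped_tex, warped_depth
```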
IEEE Transactions on Circuits and Systems for Video Technology | 2012
S Sveta Zinger; Dsa Daniel Ruijters; Ql Luat Do
We present an approach for efficiently rendering and transmitting views to a high-resolution autostereoscopic display for medical purposes. Displaying biomedical images on an autostereoscopic display poses different requirements than the consumer case. For medical usage, it is essential that the perceived image represents the actual clinical data and offers sufficiently high quality for diagnosis or understanding. Autostereoscopic display of multiple views introduces two hurdles: transmission of multi-view data through a bandwidth-limited channel and the computation time of the volume rendering algorithm. We address both issues by generating and transmitting a limited set of views enhanced with a depth signal per view. We propose an efficient view interpolation and rendering algorithm at the receiver side based on the texture+depth data representation, which can operate with a limited number of views. We study the main artifacts that occur during rendering, occlusions, and quantify them first for a synthetic model and then for real-world biomedical data. The experimental results allow us to quantify the peak signal-to-noise ratio for rendered texture and depth, as well as the fraction of disoccluded pixels, as a function of the angle between the surrounding cameras.
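For reference, a minimal sketch of the two quality measures mentioned here: PSNR of a rendered view against its ground truth, and the fraction of disoccluded pixels. The hole-mask convention is an assumption, not taken from the paper.

```python
import numpy as np

def psnr(rendered, reference, peak=255.0):
    """Peak signal-to-noise ratio between a rendered view and its ground truth."""
    mse = np.mean((rendered.astype(np.float64) - reference.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def disocclusion_ratio(hole_mask):
    """Fraction of pixels left empty by warping (True = disoccluded)."""
    return float(hole_mask.mean())
```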
International Conference on Image Processing | 2012
L Lingni Ma; Ql Luat Do
Free-Viewpoint Video (FVV) is a novel technique that creates virtual images from multiple directions by view synthesis. In this paper, an exemplar-based, depth-guided inpainting algorithm is proposed to fill the disocclusions caused by areas uncovered after projection. We develop an improved priority function that uses depth information to impose a desirable inpainting order. We also propose an efficient background-foreground separation technique to enhance the accuracy of hole filling. Furthermore, a gradient-based searching approach is developed to reduce the computational cost, and the location distance is incorporated into the patch-matching criteria to improve accuracy. The experimental results show that the gradient-based search in our algorithm requires a much lower computational cost (a factor of 6 compared to global search) while producing significantly improved visual results.
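A hedged sketch of what a depth-guided filling priority can look like in an exemplar-based inpainting scheme: the usual confidence and data terms are down-weighted for foreground (near) patches so that the background is filled first. The exact weighting below is illustrative and not necessarily the paper's formula.

```python
import numpy as np

def fill_priority(confidence, data_term, patch_depth, z_max):
    """Priority of filling the patch centred on a hole-boundary pixel.

    confidence, data_term: Criminisi-style terms computed for the patch.
    patch_depth: depth values of the already-known pixels in the patch.
    z_max: largest depth in the scene, used for normalisation.
    """
    background_weight = np.mean(patch_depth) / z_max  # prefer background patches
    return confidence * data_term * background_weight
```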
International Conference on Multimedia and Expo | 2010
Ql Luat Do; S Sveta Zinger
This paper presents our ongoing research on view synthesis of free-viewpoint 3D multi-view video for 3DTV. With the emerging breakthrough of stereoscopic 3DTV, we have extended a reference free-viewpoint rendering algorithm to generate stereoscopic views. Two similar solutions for converting free-viewpoint 3D multi-view video into stereoscopic vision have been developed. These solutions take the complexity of the algorithms into account by exploiting the redundancy in stereo images, since we aim at a real-time hardware implementation. Both solutions apply a horizontal shift instead of a double execution of the reference free-viewpoint rendering algorithm for stereo generation (FVP stereo generation), so that the rendering time can be reduced by as much as 30–40%. The trade-off, however, is that the rendering quality is 0.5–0.9 dB lower than with FVP stereo generation. Our results show that stereoscopic views can be efficiently generated from 3D multi-view video by exploiting properties unique to stereoscopic views, such as identical orientation, similar textures and a small baseline.
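A rough sketch of the horizontal-shift idea, not the authors' code: once one eye's view has been rendered with the free-viewpoint algorithm, the second eye is obtained by shifting each pixel along the baseline by its disparity (focal length times baseline divided by depth) instead of running the full warping chain again. Focal length and baseline are assumed rectified-camera parameters.

```python
import numpy as np

def shift_to_second_eye(texture, depth, focal_px, baseline):
    """Generate the other stereo view by a per-pixel horizontal shift."""
    H, W, _ = texture.shape
    right = np.zeros_like(texture)
    zbuf = np.full((H, W), np.inf)
    disparity = focal_px * baseline / np.maximum(depth, 1e-6)
    for y in range(H):
        for x in range(W):
            xr = int(round(x - disparity[y, x]))       # shift along the baseline
            if 0 <= xr < W and depth[y, x] < zbuf[y, xr]:
                zbuf[y, xr] = depth[y, x]               # keep the closest surface
                right[y, xr] = texture[y, x]
    return right  # remaining holes are filled by the usual blending/inpainting
```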
International Conference on Image Processing | 2010
Ql Luat Do; S Sveta Zinger
Interactive free-viewpoint selection applied to a 3D multi-view video signal is an attractive feature of the rapidly developing 3DTV media. In recent years, significant research has been done on free-viewpoint rendering algorithms, most of which share similar building blocks. In this paper, we analyze the four principal building blocks of most recent rendering algorithms and their contribution to the overall rendering quality. We have found that the rendering quality is dominated by the first step, Warping, which determines the basic quality level of the complete rendering chain. The third step, Blending, further increases the rendering quality by as much as 1.4 dB while reducing the disocclusions to less than 1% of the total image. Varying the angle between the two reference cameras, we notice that the quality of each principal building block degrades at a similar rate, 0.1–0.3 dB/degree for real-life sequences. From experiments with synthetic data of higher accuracy, we conclude that, to develop better free-viewpoint algorithms, it is necessary to generate depth maps with more quantization levels so that the Warping and Blending steps can further contribute to the quality enhancement.
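As a small illustration of the Blending step discussed here, the sketch below combines two warped reference views with weights that favour the camera closer to the virtual viewpoint, and falls back to the other view in disoccluded regions. The weighting scheme is illustrative, not the exact one evaluated in the paper.

```python
import numpy as np

def blend_views(tex_l, hole_l, tex_r, hole_r, dist_l, dist_r):
    """Blend two warped views.

    dist_*: (angular) distance of each reference camera to the virtual viewpoint.
    hole_*: boolean masks of disoccluded pixels in each warped view.
    """
    w_l = dist_r / (dist_l + dist_r)   # closer reference camera -> larger weight
    w_r = dist_l / (dist_l + dist_r)
    blended = w_l * tex_l.astype(np.float64) + w_r * tex_r.astype(np.float64)
    blended[hole_l] = tex_r[hole_l]    # fill holes from the other view
    blended[hole_r] = tex_l[hole_r]
    return blended
```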
Proceedings of SPIE | 2011
Ql Luat Do; S Sveta Zinger
Interactive free-viewpoint selection applied to a 3D multi-view video signal is an attractive feature of the rapidly developing 3DTV media. In recent years, significant research has been done on free-viewpoint rendering algorithms, most of which share similar building blocks. In our previous work, we analyzed the principal building blocks of most recent rendering algorithms and their contribution to the overall rendering quality, and found that the first step, Warping, determines the basic quality level of the complete rendering chain. In this paper, we analyze the warping step in more detail, since it points to ways of improvement. We observe that the accuracy of warping is mainly determined by two factors: sampling and rounding errors from pixel-based warping, and quantization errors in the depth maps. For each error factor, we propose a technique that reduces the errors and thus increases the warping quality. Pixel-based warping errors are reduced by supersampling the reference and virtual images, and depth-map errors are decreased by creating depth maps with more quantization levels. The new techniques are evaluated with two series of experiments using real-life and synthetic data. From these experiments, we observe that reducing warping errors can increase the overall rendering quality and that the impact of pixel-based warping errors is much larger than that of depth-quantization errors.
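For the depth-quantization error analysed here, a minimal sketch assuming the common inverse-depth mapping for N-bit depth maps: the metric depth change caused by one quantization step shrinks as the number of levels grows, which is the motivation for generating depth maps with more quantization levels. The mapping convention is an assumption, not taken from the paper.

```python
def dequantize_depth(v, levels, z_near, z_far):
    """Map an integer depth-map value v in [0, levels-1] back to metric depth,
    assuming the usual inverse-depth quantization (levels-1 = nearest plane)."""
    inv_z = (v / (levels - 1)) * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
    return 1.0 / inv_z

def one_step_depth_error(z, bits, z_near, z_far):
    """Depth change caused by a single quantization step at true depth z."""
    levels = 2 ** bits
    step = (1.0 / z_near - 1.0 / z_far) / (levels - 1)  # inverse-depth step size
    return z - 1.0 / (1.0 / z + step)
```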
Neurocomputing | 2014
Dsa Daniel Ruijters; S Sveta Zinger; Ql Luat Do
Autostereoscopic visualization in clinical image-guided interventions and therapy poses constraints on the maximum latency in the visualization chain. To use visual feedback in the hand-eye coordination loop, the latency of the visualization chain should be less than approximately 250–300 ms. This is a challenging constraint for interactive autostereoscopic multi-view volume rendering of large datasets. The various building blocks of such a visualization chain and their latency aspects are explored in this paper. Two complementary strategies to improve the latency of the entire chain are introduced and examined: lowering the resolution of the rendered views, and reducing the number of rendered views by interpolating the missing views at the receiver side. These strategies are balanced while optimizing latency and image quality using a fuzzy logic approach. Furthermore, the optimal view resolution for a multi-view autostereoscopic lenticular display has been determined by investigating the lenticular lattice in the frequency domain. The quantitative aspects of the latency of the proposed building blocks and the resulting image quality have been measured.
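A simple sketch of the latency-budget reasoning: the end-to-end latency is the sum of the per-stage latencies in the visualization chain and has to stay below roughly 250–300 ms for hand-eye coordination. The stage names and timings below are illustrative placeholders, not measured values from the paper.

```python
def within_latency_budget(stage_latencies_ms, budget_ms=250.0):
    """Sum the per-stage latencies and check them against the budget."""
    total = sum(stage_latencies_ms.values())
    return total, total <= budget_ms

# Illustrative chain: render a reduced set of views, transmit them with a depth
# signal per view, interpolate the remaining views at the receiver, update display.
stages = {"volume_rendering": 120.0, "encoding_and_transmission": 60.0,
          "view_interpolation": 40.0, "display_update": 20.0}
total_ms, ok = within_latency_budget(stages)
```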
Journal of Visual Communication and Image Representation | 2010
S Sveta Zinger; Ql Luat Do
3D Research | 2012
S Sveta Zinger; Ql Luat Do
Handbook of Digital Imaging | 2015
Ql Luat Do; S Sveta Zinger