Judit Martinez Bauza
Qualcomm
Publications
Featured research published by Judit Martinez Bauza.
International Conference on Multimedia and Expo | 2011
Mashhour Solh; Ghassan AlRegib; Judit Martinez Bauza
In this paper, we present a new method for objectively evaluating the quality of stereoscopic 3D videos generated by depth-image-based rendering (DIBR). First, we show how to derive, at each pixel, an ideal depth estimate that would constitute a distortion-free rendered video. The ideal depth estimate is then used to derive three distortion measures that objectify visual discomfort in stereoscopic videos: temporal outliers (TO), temporal inconsistencies (TI), and spatial outliers (SO). The combination of the three measures constitutes a vision-based quality measure for 3D DIBR-based videos, 3VQM. Finally, 3VQM is presented and validated against a full subjective evaluation. The results show that the proposed measure is accurate, coherent, and consistent with the subjective scores.
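The abstract names three distortion measures (TO, TI, SO) combined into one score but does not give their formulas. The sketch below is a hypothetical illustration of that structure only: the per-measure statistics and the combination rule are stand-in assumptions, not the paper's actual 3VQM definitions.

```python
import numpy as np

def three_vqm_sketch(ideal_depth, est_depth):
    """Illustrative 3VQM-style score from (frames, H, W) depth arrays.

    The error between the estimated depth and the ideal (distortion-free)
    depth feeds three distortion statistics; the specific formulas here
    are placeholders for the paper's TO/TI/SO measures.
    """
    err = est_depth - ideal_depth
    # Spatial outliers: spread of the depth error within each frame.
    so = np.std(err, axis=(1, 2))
    # Temporal outliers: frame-to-frame change of the depth error.
    to = np.std(np.diff(err, axis=0), axis=(1, 2))
    # Temporal inconsistencies: frame-to-frame change of the depth itself.
    ti = np.std(np.diff(est_depth, axis=0), axis=(1, 2))
    # Hypothetical combination into a single quality score in (0, 1].
    return 1.0 / (1.0 + so.mean() + to.mean() + ti.mean())

rng = np.random.default_rng(0)
ideal = rng.random((5, 32, 32))
score = three_vqm_sketch(ideal, ideal + 0.1 * rng.random((5, 32, 32)))
```

A noisier depth estimate raises all three distortion terms, so the combined score drops, mirroring the intended behavior of a quality measure.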
International Conference on Acoustics, Speech, and Signal Processing | 2010
Srenivas Varadarajan; Chaitali Chakrabarti; Lina J. Karam; Judit Martinez Bauza
This paper proposes a distributed Canny edge detection algorithm that can be mapped onto multi-core architectures for high-throughput applications. In contrast to the conventional Canny algorithm, which uses the global image gradient histogram to determine the edge-detection threshold, the proposed algorithm adaptively computes the threshold from the local distribution of gradients in each image block. The efficacy of the distributed Canny algorithm in detecting psycho-visually important edges is validated using a visual sharpness metric. The algorithm scales its throughput with the number of computing engines, achieving about a 72-times speed-up on a 16-core architecture without any change in detection performance. Furthermore, the internal memory requirements are significantly reduced, especially for smaller block sizes: if a 512×512 image is processed in 64×64 blocks using the proposed scheme, memory is reduced by a factor of 70 compared with the original Canny edge detector.
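The key idea above, a per-block threshold taken from the local gradient distribution rather than the global histogram, can be sketched as follows. This is a minimal illustration: the percentile rule is an assumed stand-in for the paper's threshold computation, and the full Canny pipeline (non-maximum suppression, hysteresis) is omitted.

```python
import numpy as np

def blockwise_edge_threshold(img, block=64, pct=90):
    """Block-adaptive edge thresholding sketch.

    Each block computes its own threshold from its local gradient
    magnitudes (here: a simple percentile), so blocks can be processed
    independently and in parallel, with memory proportional to block size.
    """
    gy, gx = np.gradient(img.astype(float))   # simple gradient estimate
    mag = np.hypot(gx, gy)                    # gradient magnitude
    edges = np.zeros_like(mag, dtype=bool)
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            m = mag[y:y + block, x:x + block]
            t = np.percentile(m, pct)         # local, not global, threshold
            edges[y:y + block, x:x + block] = m > t
    return edges
```

Because each block depends only on its own pixels, the double loop maps directly onto independent compute engines, which is the property the speed-up and memory figures above rely on.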
Electronic Imaging | 2015
Peter D. Burns; Judit Martinez Bauza
Objective evaluation of digital image quality usually includes analysis of spatial detail in captured images. Although previously developed methods and standards have found success in the evaluation of system performance, the systems in question usually include spatial image processing (e.g., sharpening or noise reduction), and the results are influenced by these operations. Our interest, however, is in the intrinsic resolution of the system. By this we mean the performance primarily defined by the lens and imager, and not influenced by subsequent image-processing steps that are invertible. Examples of such operations are brightness and contrast adjustments, and simple sharpening and blurring (setting aside image clipping and quantization). While these operations clearly modify image perception, they do not in general change the fundamental spatial image information that is captured. We present a method to measure an intrinsic spatial frequency response (SFR) computed from test images to which spatial operations may have been applied. The measure is intended to 'see through' operations for which image detail is retrievable, but to measure the loss of image resolution otherwise. We adopt a two-stage image capture model. The first stage includes a locally stable point-spread function (lens), the integration and sampling by the detector (imager), and the introduction of detector noise. The second stage comprises the spatial image processing. We describe the validation of the method, which was done using both simulation and actual camera evaluations.
Proceedings of SPIE | 2011
Judit Martinez Bauza; Manish Shiralkar
This paper describes an algorithm for estimating the disparity between the two images of a stereo pair. Disparity is related to the depth of the objects in the scene, and recovering that depth is useful in many applications such as virtual reality, 3D user interfaces, background-foreground segmentation, and depth-image-based synthesis. This last application motivated the proposed algorithm, as part of a system that estimates disparities from a stereo pair and synthesizes new views. Synthesizing virtual views enables post-processing of 3D content to adapt to user preferences or viewing conditions, as well as interfacing with multi-view auto-stereoscopic displays. The proposed algorithm was designed to satisfy the following constraints: (a) low memory requirements, (b) local and parallelizable processing, and (c) adaptability to a sudden reduction in processing resources. Our solution uses a multi-resolution, multi-size-window approach, implemented as a line-independent process well suited to GPU implementation. The multi-resolution approach provides adaptability to sudden reductions in processing capability, in addition to computational advantages; the window-based image processing guarantees low memory requirements and local processing.
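The line-independent, window-based core of such a disparity estimator can be sketched with plain sum-of-absolute-differences (SAD) block matching. This is an assumed baseline, not the paper's algorithm: the multi-resolution refinement and adaptive window sizes are omitted, and only the per-scanline, local-window structure that makes the approach GPU-friendly is shown.

```python
import numpy as np

def disparity_scanline(left, right, win=4, max_disp=16):
    """SAD block-matching disparity sketch on rectified grayscale images.

    Each scanline is processed independently of the others, and each
    window needs only a small neighborhood of both images, giving the
    local, parallelizable, low-memory structure described above.
    """
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    pad = win // 2
    L = np.pad(left.astype(float), pad, mode='edge')
    R = np.pad(right.astype(float), pad, mode='edge')
    for y in range(h):                      # lines are independent
        for x in range(w):
            patch = L[y:y + win, x:x + win]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                # SAD cost against the right image shifted by d
                cost = np.abs(patch - R[y:y + win, x - d:x - d + win]).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

In a multi-resolution scheme, a coarse level of this search would bound the disparity range at the next finer level, which is what allows the computation to be cut short when processing resources suddenly shrink.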
Archive | 2010
Vijayalakshmi R. Raveendran; Judit Martinez Bauza; Samir S. Soliman
Archive | 2010
Judit Martinez Bauza; Vijayalakshmi R. Raveendran
Archive | 2010
Judit Martinez Bauza; E. Kilpatrick Ii Thomas; Sten Jorgen Ludvig Dahl
Archive | 2011
Judit Martinez Bauza; Samir S. Soliman; Soham V. Sheth; Xun Luo; Vijayalakshmi R. Raveendran; Phanikumar Bhamidipati