Manuel M. Oliveira
Universidade Federal do Rio Grande do Sul
Publications
Featured research published by Manuel M. Oliveira.
International Conference on Computer Graphics and Interactive Techniques | 2011
Eduardo Simoes Lopes Gastal; Manuel M. Oliveira
We present a new approach for performing high-quality edge-preserving filtering of images and videos in real time. Our solution is based on a transform that defines an isometry between curves on the 2D image manifold in 5D and the real line. This transform preserves the geodesic distance between points on these curves, adaptively warping the input signal so that 1D edge-preserving filtering can be efficiently performed in linear time. We demonstrate three realizations of 1D edge-preserving filters, show how to produce high-quality 2D edge-preserving filters by iterating 1D-filtering operations, and empirically analyze the convergence of this process. Our approach has several desirable features: the use of 1D operations leads to considerable speedups over existing techniques and potential memory savings; its computational cost is not affected by the choice of the filter parameters; and it is the first edge-preserving filter to work on color images at arbitrary scales in real time, without resorting to subsampling or quantization. We demonstrate the versatility of our domain transform and edge-preserving filters on several real-time image and video processing tasks including edge-preserving filtering, depth-of-field effects, stylization, recoloring, colorization, detail enhancement, and tone mapping.
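The 1D recursive filtering at the heart of this approach can be sketched in NumPy. The snippet below is a simplified version of the RF-style filter over the transformed domain; the function name, default parameters, and iteration schedule are illustrative approximations, not the authors' reference implementation:

```python
import numpy as np

def domain_transform_rf(signal, sigma_s=10.0, sigma_r=0.3, iterations=3):
    """Edge-preserving 1D smoothing via a domain transform and a
    recursive filter (a sketch of the RF variant; names and defaults
    are illustrative)."""
    I = signal.astype(np.float64)
    # Derivative of the domain transform: 1 + (sigma_s/sigma_r)*|I'(x)|.
    # Large image gradients map neighbors far apart in the warped domain.
    dct = 1.0 + (sigma_s / sigma_r) * np.abs(np.diff(I))
    J = I.copy()
    for it in range(iterations):
        # Shrink the filter support each iteration (schedule adapted
        # from the paper's sigma_H progression).
        sigma_H = (sigma_s * np.sqrt(3.0) * 2.0 ** (iterations - it - 1)
                   / np.sqrt(4.0 ** iterations - 1.0))
        a = np.exp(-np.sqrt(2.0) / sigma_H)
        w = a ** dct  # feedback coefficient; nearly zero across strong edges
        # Left-to-right causal pass
        for i in range(1, len(J)):
            J[i] += w[i - 1] * (J[i - 1] - J[i])
        # Right-to-left anti-causal pass
        for i in range(len(J) - 2, -1, -1):
            J[i] += w[i] * (J[i + 1] - J[i])
    return J
```

Applied to a noisy step signal, the flat regions are smoothed while the step itself survives, since the warped distance across the edge makes the feedback weight vanish there.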
International Conference on Computer Graphics and Interactive Techniques | 2000
Manuel M. Oliveira; Gary Bishop; David K. McAllister
We present an extension to texture mapping that supports the representation of 3-D surface details and view motion parallax. The results are correct for viewpoints that are static or moving, far away or nearby. Our approach is very simple: a relief texture (texture extended with an orthogonal displacement per texel) is mapped onto a polygon using a two-step process: First, it is converted into an ordinary texture using a surprisingly simple 1-D forward transform. The resulting texture is then mapped onto the polygon using standard texture mapping. The 1-D warping functions work in texture coordinates to handle the parallax and visibility changes that result from the 3-D shape of the displacement surface. The subsequent texture-mapping operation handles the transformation from texture to screen coordinates.
Pattern Recognition | 2008
Leandro A. F. Fernandes; Manuel M. Oliveira
The Hough transform (HT) is a popular tool for line detection due to its robustness to noise and missing data. However, the computational cost associated with its voting scheme has prevented software implementations from achieving real-time performance, except for very small images. Many dedicated hardware designs have been proposed, but such architectures restrict the image sizes they can handle. We present an improved voting scheme for the HT that allows a software implementation to achieve real-time performance even on relatively large images. Our approach operates on clusters of approximately collinear pixels. For each cluster, votes are cast using an oriented elliptical-Gaussian kernel that models the uncertainty associated with the best-fitting line for the corresponding cluster. The proposed approach not only significantly improves the performance of the voting scheme, but also produces a much cleaner voting map and makes the transform more robust against the detection of spurious lines.
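The cluster-based voting idea can be sketched as follows. This toy version fits a line to one cluster by PCA and spreads Gaussian-weighted votes around the best-fit accumulator cell; it uses an isotropic kernel for simplicity, whereas the paper derives an oriented elliptical-Gaussian kernel from the fit uncertainty:

```python
import numpy as np

def kernel_vote(acc, points, theta_bins, rho_bins, spread=2.0):
    """Cast votes for one cluster of roughly collinear pixels using a
    Gaussian kernel centered on the cluster's best-fitting line
    (simplified sketch; kernel shape differs from the paper's)."""
    pts = np.asarray(points, dtype=np.float64)
    centroid = pts.mean(axis=0)
    # PCA: the eigenvector of the smallest eigenvalue is the line normal
    cov = np.cov((pts - centroid).T)
    evals, evecs = np.linalg.eigh(cov)
    normal = evecs[:, 0]
    theta = np.arctan2(normal[1], normal[0])   # angle of the line normal
    rho = centroid @ normal                    # signed distance to origin
    if rho < 0:                                # keep the rho >= 0 convention
        rho, theta = -rho, theta + np.pi
    theta = theta % np.pi
    # Nearest accumulator cell for (theta, rho)
    ti = int(round(theta / np.pi * (len(theta_bins) - 1)))
    ri = int(round((rho - rho_bins[0]) / (rho_bins[-1] - rho_bins[0])
                   * (len(rho_bins) - 1)))
    # Gaussian-weighted votes in a small window around the peak cell
    for dt in range(-3, 4):
        for dr in range(-3, 4):
            t, r = ti + dt, ri + dr
            if 0 <= t < acc.shape[0] and 0 <= r < acc.shape[1]:
                acc[t, r] += len(pts) * np.exp(-(dt * dt + dr * dr)
                                               / (2.0 * spread ** 2))
    return theta, rho
```

Because each cluster casts a single smooth footprint instead of one vote per pixel per angle, the voting map is far cleaner than that of the conventional per-pixel scheme.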
Interactive 3D Graphics and Games | 2005
Fabio Policarpo; Manuel M. Oliveira; João Luiz Dihl Comba
We present a technique for mapping relief textures onto arbitrary polygonal models in real time, producing correct self-occlusions, interpenetrations, shadows and per-pixel lighting. The technique uses a pixel-driven formulation based on an efficient ray-height-field intersection implemented on the GPU. It has very low memory requirements, supports extreme close-up views of the surfaces and can be applied to surfaces undergoing deformation.
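The ray-height-field intersection typical of relief mapping combines a coarse linear search with a binary-search refinement. Below is a CPU sketch of that search (the paper runs the equivalent per fragment in a pixel shader; the function and parameter names are illustrative):

```python
import numpy as np

def ray_heightfield_intersect(height, start, direction,
                              linear_steps=64, binary_steps=8):
    """Find where a ray first dips below a height field.
    `height` is a callable mapping (u, v) to a depth in [0, 1];
    the ray enters the texture volume at depth 0 and exits at depth 1.
    `direction` is scaled so its depth component spans 0..1."""
    p = np.asarray(start, dtype=np.float64)      # (u, v, depth) entry point
    d = np.asarray(direction, dtype=np.float64)
    step = d / linear_steps
    # Linear search: march until the ray point is at or below the surface
    for _ in range(linear_steps):
        prev = p.copy()
        p = p + step
        if p[2] >= height(p[0], p[1]):
            break
    # Binary search: refine between the last above/below pair of points
    lo, hi = prev, p
    for _ in range(binary_steps):
        mid = 0.5 * (lo + hi)
        if mid[2] >= height(mid[0], mid[1]):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

The linear search guards against skipping thin features, while the binary search delivers sub-step precision at negligible extra cost.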
Computer Graphics Forum | 2010
Eduardo Simoes Lopes Gastal; Manuel M. Oliveira
Image matting aims at extracting foreground elements from an image by means of color and opacity (alpha) estimation. While a lot of progress has been made in recent years on improving the accuracy of matting techniques, one common problem has persisted: the low speed of matte computation. We present the first real‐time matting technique for natural images and videos. Our technique is based on the observation that, for small neighborhoods, pixels tend to share similar attributes. Therefore, independently treating each pixel in the unknown regions of a trimap results in a lot of redundant work. We show how this computation can be significantly and safely reduced by means of a careful selection of pairs of background and foreground samples. Our technique achieves speedups of up to two orders of magnitude compared to previous ones, while producing high‐quality alpha mattes. The quality of our results has been verified through an independent benchmark. The speed of our technique enables, for the first time, real‐time alpha matting of videos, and has the potential to enable a new class of exciting applications.
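Given one candidate foreground/background sample pair, the opacity follows from the compositing equation C = alpha*F + (1 - alpha)*B by projecting the observed color onto the F-B line. The sketch below shows that standard least-squares step; the paper's shared-sampling contribution, which this omits, is the efficient search for good sample pairs across neighboring pixels:

```python
import numpy as np

def estimate_alpha(pixel, fg, bg):
    """Estimate opacity from one foreground/background sample pair by
    projecting the pixel color onto the line through F and B."""
    C, F, B = (np.asarray(v, dtype=np.float64) for v in (pixel, fg, bg))
    fb = F - B
    denom = fb @ fb
    if denom < 1e-12:          # degenerate pair: F and B are identical
        return 0.5
    alpha = (C - B) @ fb / denom   # least-squares alpha for C = aF + (1-a)B
    return float(np.clip(alpha, 0.0, 1.0))
```

A pixel exactly on the composite of the pair recovers its alpha exactly; pure foreground and background colors map to 1 and 0.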
International Conference on Computer Graphics and Interactive Techniques | 2012
Eduardo Simoes Lopes Gastal; Manuel M. Oliveira
We present a technique for performing high-dimensional filtering of images and videos in real time. Our approach produces high-quality results and accelerates filtering by computing the filter's response at a reduced set of sampling points, and using these for interpolation at all N input pixels. We show that for a proper choice of these sampling points, the total cost of the filtering operation is linear both in N and in the dimension d of the space in which the filter operates. As such, ours is the first high-dimensional filter with such a complexity. We present formal derivations for the equations that define our filter, as well as for an algorithm to compute the sampling points. This provides a sound theoretical justification for our method and for its properties. The resulting filter is quite flexible, being capable of producing responses that approximate either standard Gaussian, bilateral, or non-local-means filters. Such flexibility also allows us to demonstrate the first hybrid Euclidean-geodesic filter that runs in a single pass. Our filter is faster and requires less memory than previous approaches, being able to process a 10-Megapixel full-color image at 50 fps on modern GPUs. We illustrate the effectiveness of our approach by performing a variety of tasks ranging from edge-aware color filtering in 5-D and noise reduction (using up to 147 dimensions) to single-pass hybrid Euclidean-geodesic filtering and detail enhancement.
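The general idea of evaluating an expensive filter at a few sampling points and interpolating everywhere else can be illustrated with a 1D toy: a bilateral filter approximated by linear blurs computed at a handful of sampled intensity levels. This is only an illustration of the sample-then-interpolate strategy; the paper's adaptive manifolds place their sampling points adaptively in the high-dimensional space, which this sketch does not do:

```python
import numpy as np

def gaussian_blur_1d(x, sigma):
    """Dense 1D Gaussian convolution, truncated at 3 sigma."""
    r = max(1, int(3 * sigma))
    k = np.exp(-np.arange(-r, r + 1) ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    return np.convolve(x, k, mode='same')

def sampled_bilateral_1d(signal, sigma_s=5.0, sigma_r=0.2, n_samples=8):
    """Approximate a 1D bilateral filter by evaluating linear blurs at
    a few sampled intensity levels and interpolating per pixel (toy
    illustration of 'filter at few samples, interpolate everywhere')."""
    I = signal.astype(np.float64)
    levels = np.linspace(I.min(), I.max(), n_samples)
    responses = []
    for L in levels:
        w = np.exp(-(I - L) ** 2 / (2.0 * sigma_r ** 2))  # range weights
        responses.append(gaussian_blur_1d(w * I, sigma_s)
                         / np.maximum(gaussian_blur_1d(w, sigma_s), 1e-12))
    responses = np.array(responses)            # shape: (n_samples, N)
    # Interpolate each pixel between its two bracketing intensity levels
    idx = np.clip(np.searchsorted(levels, I) - 1, 0, n_samples - 2)
    t = (I - levels[idx]) / (levels[idx + 1] - levels[idx])
    cols = np.arange(len(I))
    return (1 - t) * responses[idx, cols] + t * responses[idx + 1, cols]
```

Only n_samples dense blurs are needed regardless of how finely the range dimension is resolved, which is where the speedup of sampling-based schemes comes from.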
Brazilian Symposium on Computer Graphics and Image Processing | 2003
Jianning Wang; Manuel M. Oliveira
Creating models of real scenes is a complex task for which the use of traditional modelling techniques is inappropriate. For this task, laser rangefinders are frequently used to sample the scene from several viewpoints, with the resulting range images integrated into a final model. In practice, due to surface reflectance properties, occlusions and accessibility limitations, certain areas of the scenes are usually not sampled, leading to holes and introducing undesirable artifacts in the resulting models. We present an algorithm for filling holes on surfaces reconstructed from point clouds. The algorithm is based on moving least squares and can recover both geometry and shading information, providing a good alternative when the properties to be reconstructed are locally smooth. The reconstruction process is mostly automatic and the sampling rate in the reconstructed areas follows the given samples. We demonstrate the use of the algorithm on both real and synthetic data sets to obtain complete geometry and reasonable shading.
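A moving-least-squares fill can be sketched in the simplified 2.5D setting of a height field with a hole: at each query point, fit a quadric with Gaussian weights that fall off with distance and evaluate it at the query. The paper operates on general surfaces reconstructed from point clouds and also recovers shading; the names and the quadric basis below are illustrative:

```python
import numpy as np

def mls_fill(known_xy, known_z, query_xy, h=1.0):
    """Fill missing height values by moving least squares: at each
    query point, fit z = c0 + c1 x + c2 y + c3 x^2 + c4 xy + c5 y^2
    with distance-based Gaussian weights (2.5D sketch)."""
    known_xy = np.asarray(known_xy, dtype=np.float64)
    known_z = np.asarray(known_z, dtype=np.float64)
    filled = []
    for q in np.atleast_2d(np.asarray(query_xy, dtype=np.float64)):
        d2 = np.sum((known_xy - q) ** 2, axis=1)
        w = np.exp(-d2 / (h * h))                 # nearby samples dominate
        x, y = (known_xy - q).T                   # local frame centered at q
        A = np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=1)
        # Weighted normal equations: (A^T W A) c = A^T W z
        AW = A * w[:, None]
        c, *_ = np.linalg.lstsq(AW.T @ A, AW.T @ known_z, rcond=None)
        filled.append(c[0])                       # polynomial value at q
    return np.array(filled)
```

Because the fit is local and weighted, the reconstruction stays faithful where the surrounding surface is locally smooth, which matches the regime the abstract describes.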
ACM Transactions on Graphics | 2009
Vitor Pamplona; Manuel M. Oliveira; Gladimir V. G. Baranoski
We introduce a physiologically-based model for pupil light reflex (PLR) and an image-based model for iridal pattern deformation. Our PLR model expresses the pupil diameter as a function of the lighting of the environment, and is described by a delay-differential equation, naturally adapting the pupil diameter even to abrupt changes in light conditions. Since the parameters of our PLR model were derived from measured data, it correctly simulates the actual behavior of the human pupil. Another contribution of our work is a model for realistic deformation of the iris pattern as a function of pupil dilation and constriction. Our models produce high-fidelity appearance effects and can be used to produce real-time predictive animations of the pupil and iris under variable lighting conditions. We assess the predictability and quality of our simulations through comparisons of modeled results against measured data derived from experiments also described in this work. Combined, our models can bring facial animation to new photorealistic standards.
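The structure of such a model, a pupil diameter driven with latency by the ambient light, can be illustrated with the classic Moon-Spencer static luminance-to-diameter formula plus a toy first-order lag toward the delayed target. This only mirrors the shape of a delay-differential PLR model; the paper's actual equation and measured constants differ:

```python
import numpy as np

def moon_spencer_diameter(luminance_blondels):
    """Static pupil diameter (mm) vs. luminance (Moon-Spencer, 1944)."""
    return 4.9 - 3.0 * np.tanh(0.4 * np.log10(luminance_blondels))

def simulate_plr(luminance, dt=0.01, tau=0.2, gain=5.0):
    """Toy delayed-response simulation: the diameter relaxes toward the
    static target computed from the luminance seen `tau` seconds ago.
    `tau` and `gain` are illustrative, not the paper's constants."""
    delay_steps = int(round(tau / dt))
    D = np.empty(len(luminance))
    D[0] = moon_spencer_diameter(luminance[0])
    for t in range(1, len(luminance)):
        L_delayed = luminance[max(0, t - delay_steps)]
        target = moon_spencer_diameter(L_delayed)
        D[t] = D[t - 1] + dt * gain * (target - D[t - 1])  # first-order lag
    return D
```

A step increase in luminance produces the expected behavior: the pupil holds its diameter through the latency window, then constricts smoothly toward the new equilibrium.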
Interactive 3D Graphics and Games | 2006
Fabio Policarpo; Manuel M. Oliveira
The ability to represent non-height-field mesostructure details is of great importance for rendering complex surface patterns, such as weave and multilayer structures. Currently, such representations are based on the use of 3D textures or large datasets of sampled data. While some of the 3D-texture-based approaches can achieve interactive performance, all these approaches require large amounts of memory. We present a technique for mapping non-height-field structures onto arbitrary polygonal models in real time, which has low memory requirements. It generalizes the notion of relief mapping to support multiple layers. This technique can be used to render realistic impostors of 3D objects that can be viewed from close proximity and from a wide angular range. Contrary to traditional impostors, these new one-polygon representations can be observed from both sides, producing correct parallax and views that are consistent with the observation of the 3D geometry they represent.
IEEE Transactions on Visualization and Computer Graphics | 2008
Giovane R. Kuhn; Manuel M. Oliveira; Leandro A. F. Fernandes
We present an efficient and automatic image-recoloring technique for dichromats that highlights important visual details that would otherwise be unnoticed by these individuals. While previous techniques approach this problem by potentially changing all colors of the original image, causing their results to look unnatural to color vision deficients, our approach preserves, as much as possible, the image's original colors. Our approach is about three orders of magnitude faster than previous ones. The results of a paired-comparison evaluation carried out with fourteen color-vision deficients (CVDs) indicated the preference of our technique over the state-of-the-art automatic recoloring technique for dichromats. When considering information visualization examples, the subjects tended to prefer our results over the original images. An extension of our technique that exaggerates color contrast tended to be preferred when CVDs compared pairs of scientific visualization images. These results provide valuable information for guiding the design of visualizations for color-vision deficients.
Collaboration
Sandra Cristina Pereira Costa Fuchs
Universidade Federal do Rio Grande do Sul