Publication


Featured research published by John Isidoro.


Computer Vision and Pattern Recognition | 1998

Head tracking via robust registration in texture map images

M. La Cascia; John Isidoro; Stan Sclaroff

A novel method for 3D head tracking in the presence of large head rotations and facial expression changes is described. Tracking is formulated in terms of color image registration in the texture map of a 3D surface model. Model appearance is recursively updated via image mosaicking in the texture map as the head orientation varies. The resulting dynamic texture map provides a stabilized view of the face that can be used as input to many existing 2D techniques for face recognition, facial expression analysis, lip reading, and eye tracking. Parameters are estimated via a robust minimization procedure; this provides robustness to occlusions, wrinkles, shadows, and specular highlights. The system was tested on a variety of sequences taken with low-quality, uncalibrated video cameras. Experimental results are reported.
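The robust minimization mentioned above can be made concrete with a small sketch. The example below is purely illustrative (the Huber weighting and all names are assumptions, not the paper's estimator): texture-map residuals are re-weighted so that outliers caused by occlusions, shadows, or specular highlights have limited influence on the registration objective.

```python
# Illustrative sketch of robust re-weighting for image registration
# (not the paper's method; names and the Huber form are assumptions).
import numpy as np

def huber_weights(residuals, delta=0.1):
    """Per-pixel weights for iteratively re-weighted least squares (Huber loss)."""
    r = np.abs(np.asarray(residuals, dtype=float))
    w = np.ones_like(r)
    mask = r > delta
    w[mask] = delta / r[mask]        # down-weight large residuals (likely outliers)
    return w

def robust_error(observed, predicted, delta=0.1):
    """Weighted sum-of-squares error used as a registration objective."""
    res = np.asarray(observed, dtype=float) - np.asarray(predicted, dtype=float)
    return np.sum(huber_weights(res, delta) * res ** 2)
```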


Computer Vision and Image Understanding | 2003

Active blobs: region-based, deformable appearance models

Stan Sclaroff; John Isidoro

A region-based approach to nonrigid motion tracking is described. Shape is defined in terms of a deformable triangular mesh that captures object shape plus a color texture map that captures object appearance. Photometric variations are also modeled. Nonrigid shape registration and motion tracking are achieved by posing the problem as an energy-based, robust minimization procedure. The approach provides robustness to occlusions, wrinkles, shadows, and specular highlights. The formulation is tailored to take advantage of texture mapping hardware available in many workstations, PCs, and game consoles. This enables nonrigid tracking at speeds approaching video rate.


International Symposium on 3D Data Processing Visualization and Transmission | 2002

Stochastic mesh-based multiview reconstruction

John Isidoro; Stanley E. Sclaroff

A method for reconstruction of 3D polygonal models from multiple views is presented. The method uses sampling techniques to construct a texture-mapped semiregular polygonal mesh of the object in question. Given a set of views and a segmentation of the object in each view, constructive solid geometry is used to build a visual hull from silhouette prisms. The resulting polygonal mesh is simplified and subdivided to produce a semiregular mesh. Regions of model fit inaccuracy are found by projecting the reference images onto the mesh from different views. The resulting error images for each view are used to compute a probability density function, and several points are sampled from it. Along the epipolar lines corresponding to these sampled points, photometric consistency is evaluated. The mesh surface is then pulled towards the regions of higher photometric consistency using free-form deformations. This sampling-based approach produces a photometrically consistent solution in far less time than previous multi-view algorithms that allow arbitrary camera placement.
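The error-driven sampling step can be illustrated with a short sketch. The example below (illustrative only; the function names and the inverse-CDF scheme are assumptions, not the paper's code) treats a per-view error image as an unnormalized probability density and draws pixel locations from it, so refinement effort concentrates where the current mesh fits the images poorly.

```python
# Illustrative sketch: sample pixel locations with probability proportional
# to a per-view reprojection-error image. Names are assumptions.
import numpy as np

def sample_error_pixels(error_image, n_samples, rng=None):
    """Draw (row, col) pixel locations with probability proportional to error."""
    rng = rng or np.random.default_rng()
    flat = np.clip(error_image.astype(np.float64).ravel(), 0.0, None)
    pdf = flat / max(flat.sum(), 1e-12)       # normalize to a discrete PDF
    idx = rng.choice(flat.size, size=n_samples, p=pdf)
    rows, cols = np.unravel_index(idx, error_image.shape)
    return np.stack([rows, cols], axis=1)

# Example: concentrate refinement samples where the model fits the image poorly.
if __name__ == "__main__":
    err = np.abs(np.random.randn(240, 320))   # stand-in for a real error image
    print(sample_error_pixels(err, n_samples=50)[:5])
```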


International Conference on Computer Graphics and Interactive Techniques | 2002

User customizable real-time fur

John Isidoro; Jason L. Mitchell

Recent advances in real-time fur rendering have enabled the development of more realistic furry characters. In this sketch, we outline a number of advances to the shell- and fin-based fur rendering technique of Lengyel et al. [2001], using the pixel and vertex shader capabilities of modern 3D hardware.
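As a rough illustration of the shell half of shell-and-fin fur rendering, the CPU-side sketch below (purely illustrative; function and parameter names are assumptions) extrudes a base mesh along its vertex normals into concentric layers whose opacity fades toward the tips.

```python
# Illustrative shell construction for fur rendering (not the paper's code).
import numpy as np

def build_fur_shells(vertices, normals, num_shells=16, fur_length=0.05):
    """Return a list of (offset_vertices, alpha) pairs, one per shell layer."""
    shells = []
    for i in range(1, num_shells + 1):
        t = i / num_shells                        # 0 at the skin, 1 at the fur tips
        offset = vertices + normals * (fur_length * t)
        alpha = (1.0 - t) ** 1.5                  # outer shells become more transparent
        shells.append((offset, alpha))
    return shells
```

In a renderer these shells would be drawn from the innermost layer outward with alpha blending, with fins added along silhouette edges; per the abstract, the technique itself runs in vertex and pixel shaders on the GPU.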


International Conference on Computer Graphics and Interactive Techniques | 2006

The real-time reprojection cache

Diego Nehab; Pedro V. Sander; John Isidoro

We describe a simple and inexpensive method that uses stock graphics hardware to cache and track surface information through time. Cached information is stored in frame-buffers, thereby avoiding complex data structures and bus traffic. When a new frame is rendered, an efficient reprojection method gives each new pixel access to information computed during previous frames. This idea can be used to adapt a variety of real-time rendering techniques to efficiently exploit spatio-temporal coherence. When applications are pixel bound, the cached algorithms show significant cost and/or quality improvements over their plain counterparts, at virtually no extra implementation overhead.
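The core of such a cache is the reprojection itself: mapping a point shaded in the current frame back to where it appeared in the previous frame's buffers. The NumPy sketch below is an illustrative CPU version (not the authors' shader; the names and NDC conventions are assumptions).

```python
# Illustrative reprojection of a surface point into the previous frame's
# framebuffer coordinates (assumed conventions, not the paper's shader code).
import numpy as np

def reproject_to_previous(world_pos, prev_view_proj, width, height):
    """Map a world-space point to pixel coordinates in the previous frame."""
    p = prev_view_proj @ np.append(world_pos, 1.0)    # to previous-frame clip space
    if p[3] <= 0.0:
        return None                                   # behind the previous camera
    ndc = p[:3] / p[3]                                # perspective divide
    if np.any(np.abs(ndc[:2]) > 1.0):
        return None                                   # outside the previous frame: cache miss
    u = (ndc[0] * 0.5 + 0.5) * width
    v = (ndc[1] * 0.5 + 0.5) * height
    return u, v, ndc[2]                               # pixel coordinates plus depth
```

A cache hit would typically also be validated, for example by comparing the returned depth against the depth stored in the previous frame, before the cached shading is reused.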


Eurographics | 2006

Artist-directable real-time rain rendering in city environments

Natalya Tatarchuk; John Isidoro

Photorealistic rain greatly enhances the realism of outdoor scenes, with applications including computer games and motion pictures. Rain is a complex atmospheric natural phenomenon consisting of numerous interacting visual effects. We present a comprehensive system for the realistic rendering of rain effects in complex environments in real time. Our system is intuitive and flexible, and provides a high degree of artistic control for achieving the desired look. We describe a number of novel GPU-based algorithms for rendering the individual components of rain effects: a hybrid of an image-space approach for rainfall and particle-based effects for dripping raindrops and splashes; water-surface simulation for ripples; animation and rendering of water droplets trickling down transparent glass panes; view-dependent warped reflections; and a number of additional effects. All our techniques respond dynamically and correctly to changes in environment lighting and viewpoint, as well as to the atmospheric illumination due to lightning. The effects render at interactive rates on consumer graphics hardware and can be easily integrated into existing game and interactive application pipelines or offline rendering.
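As a toy illustration of the image-space side of such a hybrid rainfall approach (an assumption-laden NumPy sketch, not the paper's GPU implementation), a tiled streak texture can be scrolled downward over time and blended over the rendered frame, which is far cheaper than simulating every distant raindrop as a particle.

```python
# Illustrative image-space rainfall compositing (names and blend are assumptions).
import numpy as np

def composite_rain_layer(scene_rgb, streak_tile, time, fall_speed=0.6, opacity=0.35):
    """Blend a vertically scrolling rain-streak tile over an (h, w, 3) scene image."""
    h, w, _ = scene_rgb.shape
    th, tw = streak_tile.shape                         # 2D grayscale streak pattern
    # Tile the streak pattern over the frame, offset vertically by elapsed time.
    rows = (np.arange(h)[:, None] + int(time * fall_speed * h)) % th
    cols = np.arange(w)[None, :] % tw
    rain = streak_tile[rows, cols][..., None]          # (h, w, 1) streak intensity
    return scene_rgb * (1.0 - opacity * rain) + opacity * rain
```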


Computer Animation '98 | 1998

Active voodoo dolls: a vision based input device for nonrigid control

John Isidoro; Stan Sclaroff

A vision-based technique for nonrigid control is presented that can be used for animation and video game applications. The user grasps a soft, squishable object in front of a camera; the object can be moved and deformed in order to specify motion. Active Blobs, a nonrigid tracking technique, is used to recover the position, rotation, and nonrigid deformations of the object. The resulting transformations can be applied to a texture-mapped mesh, allowing the user to control it interactively. Our use of texture-mapping hardware in tracking makes the system responsive enough for interactive animation and video game character control.


International Conference on Computer Graphics and Interactive Techniques | 2005

Angular extent filtering with edge fixup for seamless cubemap filtering

John Isidoro; Jason L. Mitchell

Although cube maps are defined on the spherical domain, standard cubemap filtering techniques perform filtering independently on each cube face. The main problem with this approach is that no information is propagated across edges, creating undesirable discontinuities along the cube face edges. A limitation of nearly all cubemapping hardware, which makes the seam problem substantially worse, is that bilinear texel filtering cannot fetch across cube faces, producing a hard seam artifact. The seam problem also causes aliasing artifacts. These two compounding problems limit the usefulness of cubemapping.
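To make the idea of filtering by angular extent concrete, the sketch below is purely illustrative (the face-direction conventions and the cosine weighting are assumptions, not the paper's edge-fixup method): it gathers texels from all six faces whose directions fall within a cone around the filter direction, so information naturally crosses face boundaries instead of being filtered per face.

```python
# Illustrative cone-based cube map filtering on the sphere (assumed conventions).
import numpy as np

def texel_directions(face, size):
    """Unit direction for each texel center of one cube face (convention assumed)."""
    t = (np.arange(size) + 0.5) / size * 2.0 - 1.0          # texel centers in [-1, 1]
    u, v = np.meshgrid(t, t)
    one = np.ones_like(u)
    axes = {0: (one, -v, -u), 1: (-one, -v, u),             # +X, -X
            2: (u, one, v),   3: (u, -one, -v),             # +Y, -Y
            4: (u, -v, one),  5: (-u, -v, -one)}            # +Z, -Z
    d = np.stack(axes[face], axis=-1)
    return d / np.linalg.norm(d, axis=-1, keepdims=True)

def filter_cubemap(faces, direction, cone_angle):
    """Cone-limited, cosine-weighted average over six (size, size, channels) faces.

    Assumes cone_angle <= pi/2; a production filter would also weight each
    texel by its solid angle, which this sketch omits for brevity.
    """
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    size = faces[0].shape[0]
    total = np.zeros(faces[0].shape[-1])
    weight = 0.0
    for f in range(6):
        dirs = texel_directions(f, size)
        cos = dirs @ direction
        mask = cos >= np.cos(cone_angle)                    # texels inside the cone
        total += (faces[f][mask] * cos[mask][:, None]).sum(axis=0)
        weight += cos[mask].sum()
    return total / max(weight, 1e-8)
```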


International Conference on Computer Graphics and Interactive Techniques | 2006

Animated skybox rendering and lighting techniques

John Isidoro; Pedro V. Sander

Figure 1: A real-time rendering of the Parthenon.

In this chapter we briefly describe techniques used to represent and render the high dynamic range (HDR) time-lapse sky imagery in the real-time Parthenon demo (Figure 1). These methods, along with several other rendering techniques, achieve real-time frame rates using the latest generation of graphics hardware.


International Conference on Computer Graphics and Interactive Techniques | 2007

Combining computer vision and physics simulations using GPGPU

Justin Hensley; John Isidoro; Arcot J. Preetham

We present a system that uses the immense processing capabilities of graphics processors (GPUs) to enable a computer vision algorithm, such as stereo depth extraction, to drive a physics simulation in an interactive environment. This combination of processing has the potential to dramatically alter the way that people interact with computers through novel user interfaces and in interactive gaming.
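As a minimal illustration of the vision side mentioned above, the sketch below implements a simple CPU block-matching stereo routine (an illustrative assumption, not the GPGPU implementation described): each pixel's disparity is chosen by comparing a window in the left image against horizontally shifted windows in the right image, and depth then follows from disparity.

```python
# Illustrative CPU block-matching stereo (not the paper's GPGPU code).
import numpy as np

def box_sum(img, radius):
    """Sum over a (2r+1)x(2r+1) window at every pixel, using an integral image."""
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode="edge")
    integral = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    integral[1:, 1:] = pad.cumsum(0).cumsum(1)
    return (integral[k:, k:] - integral[:-k, k:]
            - integral[k:, :-k] + integral[:-k, :-k])

def sad_disparity(left, right, max_disp=32, radius=3):
    """Per-pixel disparity via sum-of-absolute-differences block matching."""
    left = left.astype(np.float64)
    right = right.astype(np.float64)
    best_cost = np.full(left.shape, np.inf)
    disparity = np.zeros(left.shape, dtype=np.int32)
    for d in range(max_disp):
        shifted = np.roll(right, d, axis=1)        # candidate disparity shift
        cost = box_sum(np.abs(left - shifted), radius)
        better = cost < best_cost
        best_cost[better] = cost[better]
        disparity[better] = d
    return disparity                               # depth is proportional to 1 / disparity
```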

Collaboration


Dive into John Isidoro's collaborations.

Top Co-Authors

Pedro V. Sander
Hong Kong University of Science and Technology

Diego Nehab
Instituto Nacional de Matemática Pura e Aplicada