Publication


Featured research published by Ryan White.


Computer Vision and Pattern Recognition | 2004

Names and faces in the news

Tamara L. Berg; Alexander C. Berg; Jaety Edwards; Michael Maire; Ryan White; Yee Whye Teh; Erik G. Learned-Miller; David A. Forsyth

We show that quite good face clustering is possible for a dataset of inaccurately and ambiguously labelled face images. Our dataset is 44,773 face images, obtained by applying a face finder to approximately half a million captioned news images. This dataset is more realistic than usual face recognition datasets, because it contains faces captured in the wild in a variety of configurations with respect to the camera, taking a variety of expressions, and under illumination of widely varying color. Each face image is associated with a set of names, automatically extracted from the associated caption. Many, but not all, such sets contain the correct name. We cluster face images in appropriate discriminant coordinates. We use a clustering procedure to break ambiguities in labelling and identify incorrectly labelled faces. A merging procedure then identifies variants of names that refer to the same individual. The resulting representation can be used to label faces in news images or to organize news pictures by individuals present. An alternative view of our procedure is as a process that cleans up noisy supervised data. We demonstrate how to use entropy measures to evaluate such procedures.
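The entropy-based evaluation mentioned above can be sketched as follows: treat the names attached to each cluster as a label distribution and compute its entropy; a size-weighted average near zero indicates nearly pure clusters. This is an illustrative sketch under assumed names, not the exact measure from the paper:

```python
import math
from collections import Counter

def cluster_label_entropy(clusters):
    """Size-weighted average entropy (in bits) of the label distribution
    within each cluster; 0.0 means every cluster is pure."""
    total = sum(len(c) for c in clusters)
    h = 0.0
    for labels in clusters:
        counts = Counter(labels)
        n = len(labels)
        h_c = -sum((k / n) * math.log2(k / n) for k in counts.values())
        h += (n / total) * h_c
    return h

# Pure clusters score 0 bits; a maximally mixed pair scores 1 bit.
print(cluster_label_entropy([["Bush", "Bush"], ["Blair"]]))  # 0.0
print(cluster_label_entropy([["Bush", "Blair"]]))            # 1.0
```

A cleaning procedure that lowers this number has, by this measure, produced better-separated identities.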


International Conference on Computer Graphics and Interactive Techniques | 2007

Capturing and animating occluded cloth

Ryan White; Keenan Crane; David A. Forsyth

We capture the shape of moving cloth using a custom set of color markers printed on the surface of the cloth. The output is a sequence of triangle meshes with static connectivity and with detail at the scale of individual markers in both smooth and folded regions. We compute marker coordinates in space using correspondence across multiple synchronized video cameras. Correspondence is determined from color information in small neighborhoods and refined using a novel strain pruning process. Final correspondence does not require neighborhood information. We use a novel data-driven hole-filling technique to fill occluded regions. Our results include several challenging examples: a wrinkled shirt sleeve, a dancing pair of pants, and a rag tossed onto a cup. Finally, we demonstrate that cloth capture is reusable by animating a pair of pants using human motion capture data.


Computer Vision and Pattern Recognition | 2007

Transfer Learning in Sign Language

Ali Farhadi; David A. Forsyth; Ryan White

We build word models for American Sign Language (ASL) that transfer between different signers and different aspects. This is advantageous because one could use large amounts of labelled avatar data in combination with a smaller amount of labelled human data to spot a large number of words in human data. Transfer learning is possible because we represent blocks of video with novel intermediate discriminative features based on splits of the data. By constructing the same splits in avatar and human data and clustering appropriately, our features are both discriminative and semantically similar: across signers, similar features imply similar words. We demonstrate transfer learning in two scenarios: from an avatar to a frontally viewed human signer, and from an avatar to a human signer in a 3/4 view.


Computer Vision and Pattern Recognition | 2006

Combining Cues: Shape from Shading and Texture

Ryan White; David A. Forsyth

We demonstrate a method for reconstructing the shape of a deformed surface from a single view. After decomposing an image into irradiance and albedo components, we combine normal cues from shading and texture to produce a field of unambiguous normals. Using these normals, we reconstruct the 3D geometry. Our method works in two regimes: either requiring the frontal appearance of the texture or building it automatically from a series of images of the deforming texture. We can recover geometry with errors below four percent of object size on arbitrary textures, and estimate specific geometric parameters using a custom texture even more accurately.


American Control Conference | 2001

Autonomous following lateral control of heavy vehicles using laser scanning radar

Ryan White; Masayoshi Tomizuka

This paper describes and compares some simple, easily applied solutions to the heavy vehicle autonomous following lateral control problem using a laser scanning radar unit. When mounted on the front of the following tractor, the sensor gives the relative displacement of the preceding trailer and the relative yaw between the following tractor and preceding trailer (although the ability to measure small relative yaw angles is quite limited). Given two vehicles within a platoon, there are two possible methods for interpolating a trajectory: one is linear, and the other assumes constant curvature. Using these methods, the performance when tracking the preceding trailer with the following tractor's CG is compared to the case when the following tractor uses a measurement of the preceding tractor position as the reference. The results of simulations show the importance of using the same reference point on the preceding vehicle as the regulation point on the following vehicle.
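The two interpolation schemes can be sketched as follows: with the preceding vehicle observed at (x, y) in the follower's frame, the linear method heads along the straight chord to that point, while the constant-curvature method follows the circular arc tangent to the follower's current heading that passes through it (curvature 2y/(x² + y²), the familiar pure-pursuit geometry). This is an illustrative reconstruction, not the controllers from the paper:

```python
import math

def chord_heading(x, y):
    """Linear interpolation: desired heading points straight at the target."""
    return math.atan2(y, x)

def arc_curvature(x, y):
    """Constant-curvature interpolation: curvature of the circular arc
    tangent to the follower's current heading (the +x axis) that passes
    through the target point (x, y)."""
    return 2.0 * y / (x * x + y * y)

# A target dead ahead gives the same answer from both methods:
print(chord_heading(10.0, 0.0), arc_curvature(10.0, 0.0))  # 0.0 0.0
```

The two schemes diverge as the lateral offset y grows, which is where the choice of interpolation (and of reference point) starts to matter.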


North American Chapter of the Association for Computational Linguistics | 2003

Words and pictures in the news

Jaety Edwards; Ryan White; David A. Forsyth

We discuss the properties of a collection of news photos and captions, collected from the Associated Press and Reuters. Captions have a vocabulary dominated by proper names. We have implemented various text clustering algorithms to organize these items by topic, as well as an iconic matcher that identifies articles that share a picture. We have found that the special structure of captions allows us to extract some names of people actually portrayed in the image quite reliably, using a simple syntactic analysis. We have been able to build a directory of face images of individuals from this collection.


European Conference on Computer Vision | 2006

Retexturing single views using texture and shading

Ryan White; David A. Forsyth

We present a method for retexturing non-rigid objects from a single viewpoint. Without reconstructing 3D geometry, we create realistic video with shape cues at two scales. At a coarse scale, a track of the deforming surface in 2D allows us to erase the old texture and overwrite it with a new texture. At a fine scale, estimates of the local irradiance provide strong cues of fine-scale structure in the actual lighting environment. Computing irradiance from explicit correspondence is difficult and unreliable, so we limit our reconstructions to screen printing, a common printing technique with a finite number of colors. Our irradiance estimates are computed in a local manner: pixels are classified according to color, then irradiance is computed given the color. We demonstrate results in two situations: on a special shirt designed for easy retexturing and on natural clothing with screen prints. Because of the quality of the results, we believe that this technique has wide applications in special effects and advertising.
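The local irradiance estimate described above can be sketched as: classify each pixel to its nearest screen-print ink color, then divide the observed brightness by that ink's reference brightness. The palette representation and the crude per-channel-mean luminance below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def estimate_irradiance(image, palette):
    """image: (H, W, 3) float RGB in [0, 1]; palette: (K, 3) reference
    ink colors. Classify each pixel to its nearest ink, then estimate
    irradiance as observed luminance / ink reference luminance."""
    flat = image.reshape(-1, 3)
    d = ((flat[:, None, :] - palette[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(1)              # nearest ink for each pixel
    lum = flat.mean(1)                # crude luminance proxy
    ink_lum = palette.mean(1)[labels]
    return (lum / ink_lum).reshape(image.shape[:2])
```

Because each pixel is explained by exactly one known ink color, shading can be separated from albedo without any cross-frame correspondence, which is the point of restricting to screen prints.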


International Conference on Computer Graphics and Interactive Techniques | 2005

Cloth capture

Ryan White; Anthony Lobay; David A. Forsyth

We present a method for capturing the geometry and parameterization of fast-moving cloth using multiple video cameras, without requiring camera calibration. Our cloth is printed with a multiscale pattern that allows capture at both high speed and high spatial resolution even though self-occlusion might block any individual camera from seeing the majority of the cloth. We show how to incorporate knowledge of this pattern into conventional structure from motion approaches, and use a novel scheme for camera calibration using the pattern, derived from the shape from texture literature. By combining strain minimization with the point reconstruction we produce visually appealing cloth sequences. We demonstrate our algorithm by capturing, retexturing and displaying several sequences of fast-moving cloth.


American Control Conference | 2002

Estimating relative position and yaw with laser scanning radar using probabilistic data association

Ryan White; Masayoshi Tomizuka

Many vehicle following applications require that the relative position and sometimes yaw between vehicles be measured by the following vehicle. Typically, vision and radar systems are used to obtain the relative position, and, while relative yaw can be measured, the accuracy may sometimes be unacceptable. The focus of this paper is on robust, accurate estimation of target position and yaw relative to the sensor of interest, a laser scanning radar (LIDAR) sensor. A probabilistic data association algorithm, developed by Bar-Shalom (1978) for standard radar sensors, is adapted for use with the LIDAR sensor and for estimation of the relative yaw. Computational concerns for real-time implementation necessitate the use of various pre-filtering and filter restructuring techniques. Tests of the algorithm on actual LIDAR data recorded outdoors show the exceptional performance of the estimator.
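The probabilistic data association step can be sketched as follows: each LIDAR return near the predicted measurement gets a weight proportional to its Gaussian likelihood under the predicted measurement and innovation covariance, plus one extra weight for "no valid return"; the state update then uses the weighted mean innovation. The clutter density and detection probability below are illustrative parameters, not values from the paper:

```python
import numpy as np

def pda_weights(z_pred, S, measurements, clutter_density=0.01, p_detect=0.9):
    """Association probabilities for each 2D measurement, with the last
    entry being the probability that none of them is the true return,
    following the standard PDA weighting."""
    S_inv = np.linalg.inv(S)
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(S)))
    lik = []
    for z in measurements:
        v = z - z_pred                      # innovation for this return
        lik.append(norm * np.exp(-0.5 * v @ S_inv @ v))
    lik = p_detect * np.array(lik) / clutter_density
    w = np.append(lik, 1.0 - p_detect)      # missed-detection hypothesis
    return w / w.sum()
```

Returns far from the prediction receive near-zero weight, which is how the estimator stays robust to spurious LIDAR reflections without hard gating decisions.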


International Conference on Computer Graphics and Interactive Techniques | 2007

Data driven cloth animation

Ryan White; Keenan Crane; David A. Forsyth

We present a new method for cloth animation based on data-driven synthesis. In contrast to approaches that focus on physical simulation, we animate cloth by manipulating short sequences of existing cloth animation. While our source of data is cloth animation captured using video cameras ([White et al. 2007]), the method is equally applicable to simulation data. The approach has benefits in both cases: current cloth capture is limited because small tweaks to the data require filming an entirely new sequence. Likewise, simulation suffers from long computation times and complications such as tangling. In this sketch we create new animations by fitting cloth animation to human motion capture data, i.e., we drive the cloth with a skeleton.

Collaboration


Dive into Ryan White's collaboration.

Top Co-Authors

Jaety Edwards (University of California)

Erik G. Learned-Miller (University of Massachusetts Amherst)

Michael Maire (California Institute of Technology)

Andrew J. Bonham (Metropolitan State University of Denver)

Keenan Crane (Carnegie Mellon University)