Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Harlyn Baker is active.

Publications


Featured research published by Harlyn Baker.


computer vision and pattern recognition | 1988

Generalizing epipolar-plane image analysis on the spatiotemporal surface

Harlyn Baker; Robert C. Bolles

The previous implementations of our Epipolar-Plane Image Analysis mapping technique demonstrated the feasibility and benefits of the approach, but were carried out for restricted camera geometries. The question of more general geometries made the technique's utility for autonomous navigation uncertain. We have developed a generalization of our analysis that (a) enables varying view direction, including variation over time; (b) provides three-dimensional connectivity information for building coherent spatial descriptions of observed objects; and (c) operates sequentially, allowing initiation and refinement of scene feature estimates while the sensor is in motion. To implement this generalization it was necessary to develop an explicit description of the evolution of images over time. We have achieved this by building a process that creates a set of two-dimensional manifolds defined at the zeros of a three-dimensional spatiotemporal Laplacian. These manifolds represent explicitly both the spatial and temporal structure of the temporally evolving imagery, and we term them spatiotemporal surfaces. The surfaces are constructed incrementally, as the images are acquired. We describe a tracking mechanism that operates locally on these evolving surfaces in carrying out three-dimensional scene reconstruction.
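
As a rough illustration of the surface definition above (a batch sketch only, not the paper's incremental construction or tracking mechanism), the following Python code stacks an image sequence into a spatiotemporal volume, applies a smoothed 3-D Laplacian, and marks its zero crossings; the smoothing scale is an arbitrary assumption.

```python
# Minimal sketch (not the paper's implementation): locate the zeros of a
# 3-D spatiotemporal Laplacian over an image sequence, which the abstract
# uses to define the spatiotemporal surfaces.  Sigma values are illustrative.
import numpy as np
from scipy.ndimage import gaussian_laplace

def spatiotemporal_surface_mask(frames, sigma=(1.5, 1.5, 1.5)):
    """frames: sequence of grayscale images -> boolean zero-crossing mask."""
    volume = np.stack(frames, axis=0).astype(np.float64)   # (t, y, x)
    log = gaussian_laplace(volume, sigma=sigma)             # smoothed 3-D Laplacian
    # A voxel lies on the surface if the Laplacian changes sign with any
    # of its axis-aligned neighbours.
    mask = np.zeros_like(log, dtype=bool)
    for axis in range(3):
        shifted = np.roll(log, -1, axis=axis)
        crossing = (log * shifted) < 0
        idx = [slice(None)] * 3
        idx[axis] = slice(0, -1)          # ignore the wrap-around slice from np.roll
        mask[tuple(idx)] |= crossing[tuple(idx)]
    return mask
```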


ACM Transactions on Multimedia Computing, Communications, and Applications | 2005

Understanding performance in Coliseum, an immersive videoconferencing system

Harlyn Baker; Nina Bhatti; Donald Tanguay; Irwin Sobel; Dan Gelb; Michael E. Goss; W. Bruce Culbertson; Thomas Malzbender

Coliseum is a multiuser immersive remote teleconferencing system designed to provide collaborative workers the experience of face-to-face meetings from their desktops. Five cameras are attached to each PC display and directed at the participant. From these video streams, view synthesis methods produce arbitrary-perspective renderings of the participant and transmit them to others at interactive rates, currently about 15 frames per second. Combining these renderings in a shared synthetic environment gives the appearance of having all participants interacting in a common space. In this way, Coliseum enables users to share a virtual world, with acquired-image renderings of their appearance replacing the synthetic representations provided by more conventional avatar-populated virtual worlds. The system supports virtual mobility---participants may move around the shared space---and reciprocal gaze, and has been demonstrated in collaborative sessions of up to ten Coliseum workstations, and sessions spanning two continents. Coliseum is a complex software system which pushes commodity computing resources to the limit. We set out to measure the different aspects of resource usage (network, CPU, memory, and disk) to uncover the bottlenecks and guide enhancement and control of system performance. Latency is a key component of Quality of Experience for video conferencing. We present how each aspect of the system---cameras, image processing, networking, and display---contributes to total latency. Performance measurement is as complex as the system to which it is applied. We describe several techniques to estimate performance through direct lightweight instrumentation as well as use of realistic end-to-end measures that mimic actual user experience. We describe the various techniques and how they can be used to improve system performance for Coliseum and other network applications. This article summarizes the Coliseum technology and reports on issues related to its performance---its measurement, enhancement, and control.
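
The kind of lightweight per-stage instrumentation described above can be illustrated with a small timing sketch; the stage names and pipeline here are hypothetical placeholders, not Coliseum's actual code.

```python
# Illustrative per-stage latency instrumentation (hypothetical stages, not
# Coliseum's implementation).
import time
from collections import defaultdict

class StageTimer:
    def __init__(self):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    def measure(self, stage, fn, *args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        self.totals[stage] += time.perf_counter() - start
        self.counts[stage] += 1
        return result

    def report(self):
        for stage, total in self.totals.items():
            mean_ms = 1000.0 * total / self.counts[stage]
            print(f"{stage}: {mean_ms:.1f} ms/frame over {self.counts[stage]} frames")

# Per frame one might call timer.measure("capture", grab_frame), then
# timer.measure("render", ...), etc.; summing the per-stage means gives a
# rough end-to-end latency budget.
```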


acm multimedia | 2003

Computation and performance issues in Coliseum: an immersive videoconferencing system

Harlyn Baker; Nina Bhatti; Donald Tanguay; Irwin Sobel; Dan Gelb; Michael E. Goss; John MacCormick; Kei Yuasa; W. Bruce Culbertson; Thomas Malzbender

Coliseum is a multiuser immersive remote teleconferencing system designed to provide collaborative workers the experience of face-to-face meetings from their desktops. Five cameras are attached to each PC display and directed at the participant. From these video streams, view synthesis methods produce arbitrary-perspective renderings of the participant and transmit them to others at interactive rates, currently about 15 frames per second. Combining these renderings in a shared synthetic environment gives the appearance of having all participants interacting in a common space. In this way, Coliseum enables users to share a virtual world, with acquired-image renderings of their appearance replacing the synthetic representations provided by more conventional avatar-populated virtual worlds. The system supports virtual mobility--participants may move around the shared space--and reciprocal gaze, and has been demonstrated in collaborative sessions of up to ten Coliseum workstations, and sessions spanning two continents. This paper summarizes the technology, and reports on issues related to its performance.


International Journal of Imaging Systems and Technology | 2007

Assessing Human Skin Color from Uncalibrated Images

Joanna Marguier; Nina Bhatti; Harlyn Baker; Michael Harville; Sabine Süsstrunk

Images of a scene captured with multiple cameras will have different color values because of variations in color rendering across devices. We present a method to accurately retrieve color information from uncalibrated images taken under uncontrolled lighting conditions with an unknown device and no access to raw data, but with a limited number of reference colors in the scene. The method is used to assess skin tones. A subject is imaged with a calibration target. The target is extracted and its color values are used to compute a color correction transform that is applied to the entire image. We establish that the best mapping is done using a target consisting of skin colored patches representing the whole range of human skin colors. We show that color information extracted from images is well correlated with color data derived from spectral measurements of skin. We also show that skin color can be consistently measured across cameras with different color rendering and resolutions ranging from 0.1 to 4.0 megapixels.
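
A minimal sketch of the color-correction step implied by this abstract, assuming a simple affine model fit to the calibration target's patches (the paper's exact transform and color space are not reproduced here):

```python
# Fit a least-squares affine colour transform from the target's measured patch
# colours to their known reference values, then apply it to the whole image.
# The affine model is an assumption for illustration.
import numpy as np

def fit_color_correction(measured_rgb, reference_rgb):
    """measured_rgb, reference_rgb: (N, 3) arrays for the N target patches."""
    A = np.hstack([measured_rgb, np.ones((len(measured_rgb), 1))])  # affine model
    M, *_ = np.linalg.lstsq(A, reference_rgb, rcond=None)           # (4, 3) transform
    return M

def apply_color_correction(image, M):
    h, w, _ = image.shape
    flat = image.reshape(-1, 3).astype(np.float64)
    flat = np.hstack([flat, np.ones((flat.shape[0], 1))]) @ M
    return np.clip(flat, 0, 255).reshape(h, w, 3).astype(np.uint8)
```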


Versus | 1998

Robust, real-time people tracking in open environments using integrated stereo, color, and face detection

Trevor Darrell; Gaile G. Gordon; John Iselin Woodfill; Harlyn Baker; Michael Harville

We present an approach to robust real-time person tracking in crowded and/or unknown environments using multimodal integration. We combine stereo, color, and face detection modules into a single robust system, and show an initial application for an interactive display where the user sees his face distorted into various comic poses in real-time. Stereo processing is used to isolate the figure of a user from other objects and people in the background. Skin-hue classification identifies and tracks likely body parts within the foreground region, and face pattern detection discriminates and localizes the face within the tracked body parts. We discuss the failure modes of these individual components, and report results with the complete system in trials with thousands of users.
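
A schematic of the modality cascade described above might look like the following; the stereo input, hue thresholds, and Haar-cascade face detector are stand-ins for illustration, not the components used in the paper.

```python
# Rough sketch of the cascade: stereo foreground -> skin-hue pixels -> face
# detection.  Thresholds and the OpenCV detector are illustrative assumptions.
import cv2
import numpy as np

def track_person(bgr_frame, disparity, min_disparity=32):
    # 1. Stereo: keep only pixels close enough to be the foreground user.
    foreground = disparity > min_disparity

    # 2. Colour: crude skin-hue gate in HSV, restricted to the foreground.
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255)).astype(bool)
    candidates = (foreground & skin).astype(np.uint8) * 255

    # 3. Face: run a detector only inside the candidate body region.
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    masked = cv2.bitwise_and(bgr_frame, bgr_frame, mask=candidates)
    gray = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
    return face_cascade.detectMultiScale(gray)  # (x, y, w, h) face boxes
```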


international conference on image processing | 2005

Consistent image-based measurement and classification of skin color

Michael Harville; Harlyn Baker; Nina Bhatti; Sabine Süsstrunk

Little prior image processing work has addressed estimation and classification of skin color in a manner that is independent of camera and illuminant. To this end, we first present new methods for 1) fast, easy-to-use image color correction, with specialization toward skin tones, and 2) fully automated estimation of facial skin color, with robustness to shadows, specularities, and blemishes. Each of these is validated independently against ground truth, and then combined with a classification method that successfully discriminates skin color across a population of people imaged with several different cameras. We also evaluate the effects of image quality and various algorithmic choices on our classification performance. We believe our methods are practical for relatively untrained operators, using inexpensive consumer equipment.


IMMERSCOM '09 Proceedings of the 2nd International Conference on Immersive Telecommunications | 2009

Camera and projector arrays for immersive 3D video

Harlyn Baker; Zeyu Li

Applying recent advances in multi-imager capture and multi-projector display, we combine capabilities through the Nizza multimedia dataflow architecture to deliver low-cost wide-VGA-quality low-latency autostereoscopic 3D display of live video on a single PC. Supporting multiple users as they observe and interact against a life-sized display surface responsive to their positions, this facility will open new opportunities in mediated interaction.


acm multimedia | 2006

A multi-imager camera for variable-definition video (XDTV)

Harlyn Baker; Donald Tanguay

The enabling technologies of increasing PC bus bandwidth, multicore processors, and advanced graphics processors combined with a high-performance multi-imager camera system are leading to new ways of considering video. We describe scalable varied-resolution video capture, presenting a novel method of generating multi-resolution dialable-shape panoramas, a line-based calibration method that achieves optimal multi-imager global registration across possibly disjoint views, and a technique for recasting mosaicking homographies for arbitrary planes. Results show synthesis of a 7.5 megapixel (MP) video stream from 22 synchronized uncompressed imagers operating at 30 Hz on a single PC.
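
The mosaicking described above rests on warping each imager's frame into a common plane with a 3x3 homography; a minimal compositing sketch, assuming the homographies are already available from calibration, is shown below.

```python
# Illustrative compositing of one imager's frame into a panorama via a given
# 3x3 homography H (no blending, for brevity).  H would come from the paper's
# line-based calibration; here it is assumed to be supplied.
import cv2
import numpy as np

def composite(panorama, frame, H):
    """Warp `frame` into the panorama's coordinate system and overwrite the
    covered pixels."""
    h, w = panorama.shape[:2]
    warped = cv2.warpPerspective(frame, H, (w, h))
    mask = cv2.warpPerspective(np.full(frame.shape[:2], 255, np.uint8), H, (w, h))
    panorama[mask > 0] = warped[mask > 0]
    return panorama
```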


human computer interaction with mobile devices and services | 2008

Color match: an imaging based mobile cosmetics advisory service

Jhilmil Jain; Nina Bhatti; Harlyn Baker; Hui Chao; Mohamed Dekhil; Michael Harville; Nic Lyons; John C. Schettino; Sabine Süsstrunk

In this paper we describe an exploratory study of a mobile cosmetic advisory system that enables women to select appropriate colors of cosmetics. This system is intended for commercial use to address the problem of foundation color selection. Although women are primarily responsible for making most purchasing decisions in the US, we found very few studies to assess the adoption of retail-related mobile services by women. Based on surveys, semi-structured interviews, and focus groups, we have identified a number of design factors that should be considered when designing mobile services for women consumers. The results of our study indicate that while usefulness is an important factor, other design aspects such as mobile vs. kiosk, installed vs. existing software, technical comfort vs. social comfort, social vs. individual, and privacy and trust should also be accounted for.


international conference on image processing | 2006

Automatic Skin Pixel Selection and Skin Color Classification

Sangho Yoon; Michael Harville; Harlyn Baker; Nina Bhatti

We describe an automatic method for classifying skin color, independent of lighting and imaging device characteristics, using consumer digital cameras and a simple color calibration target. After color normalization and face detection are performed, pixels of each face image are clustered in an unsupervised fashion. Pixels likely to be representative of skin color, rather than of distractors such as shadows, specularities, eyes, and lips, are identified by selecting the dominant clusters that have a large number of pixels assigned per unit volume. A Gauss mixture model (GMM) of a person's skin color is formed from the pixels belonging to the selected clusters. Given a set of exemplar images with skin color labels assigned by an expert, we show that the label assigned by the same expert to a new test face image can be predicted by comparison of the GMMs of the test image and the exemplars. Specifically, we use the label of the exemplar whose GMM has the smallest KL divergence from that of the test image.
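
A sketch of the exemplar-matching step, assuming scikit-learn mixtures and a Monte Carlo KL estimate (KL divergence between mixtures has no closed form); the component count and sample size are illustrative choices, not the paper's.

```python
# Fit a Gaussian mixture to a face's selected skin pixels and label a test
# face with the exemplar whose GMM is closest in (estimated) KL divergence.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_skin_gmm(skin_pixels, n_components=3, seed=0):
    """skin_pixels: (N, 3) array of colour values from the selected clusters."""
    return GaussianMixture(n_components, covariance_type="full",
                           random_state=seed).fit(skin_pixels)

def kl_divergence(gmm_p, gmm_q, n_samples=5000):
    # Monte Carlo estimate of KL(P || Q) using samples drawn from P.
    samples, _ = gmm_p.sample(n_samples)
    return np.mean(gmm_p.score_samples(samples) - gmm_q.score_samples(samples))

def classify(test_gmm, exemplar_gmms, exemplar_labels):
    kls = [kl_divergence(test_gmm, g) for g in exemplar_gmms]
    return exemplar_labels[int(np.argmin(kls))]
```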

Collaboration


Dive into Harlyn Baker's collaboration.

Top Co-Authors

Sabine Süsstrunk, École Polytechnique Fédérale de Lausanne

Zeyu Li, University of California

Joanna Marguier, École Polytechnique Fédérale de Lausanne

Gaile G. Gordon, Interval Research Corporation

John Iselin Woodfill, Interval Research Corporation