Publication


Featured research published by Matthew Flagg.


International Journal of Computer Vision | 2012

Motion Coherent Tracking Using Multi-label MRF Optimization

David Tsai; Matthew Flagg; Atsushi Nakazawa; James M. Rehg

We present a novel off-line algorithm for target segmentation and tracking in video. In our approach, video data is represented by a multi-label Markov Random Field model, and segmentation is accomplished by finding the minimum energy label assignment. We propose a novel energy formulation which incorporates both segmentation and motion estimation in a single framework. Our energy functions enforce motion coherence both within and across frames. We utilize state-of-the-art methods to efficiently optimize over a large number of discrete labels. In addition, we introduce a new ground-truth dataset, called Georgia Tech Segmentation and Tracking Dataset (GT-SegTrack), for the evaluation of segmentation accuracy in video tracking. We compare our method with several recent on-line tracking algorithms and provide quantitative and qualitative performance comparisons.
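
The abstract summarizes the method only at a high level. As a rough illustration of what a multi-label MRF energy and its minimization look like, here is a minimal sketch assuming a 4-connected pixel grid, a simple Potts pairwise term standing in for the paper's motion-coherence potentials, and iterated conditional modes (ICM) in place of the graph-cut style optimizers the authors actually use; all names are hypothetical.

```python
import numpy as np

def mrf_energy(labels, unary, lam=1.0):
    """Total energy of a labeling on a 4-connected pixel grid.

    labels: (H, W) integer label per pixel
    unary:  (H, W, L) data cost of each label at each pixel
    lam:    weight of the pairwise (coherence) term
    """
    h, w = labels.shape
    e = unary[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    # Potts pairwise term: penalize label changes between neighbors,
    # a crude stand-in for the paper's motion-coherence potentials.
    e += lam * (labels[1:, :] != labels[:-1, :]).sum()
    e += lam * (labels[:, 1:] != labels[:, :-1]).sum()
    return e

def icm(unary, lam=1.0, iters=5):
    """Iterated conditional modes: a simple local minimizer used here
    in place of the large-label-space optimizers the paper relies on."""
    h, w, num_labels = unary.shape
    labels = unary.argmin(axis=2)  # start from the best unary label
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                costs = unary[y, x].copy()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        costs += lam * (np.arange(num_labels) != labels[ny, nx])
                labels[y, x] = costs.argmin()
    return labels

# Toy example: two labels (target / background) on an 8x8 frame.
rng = np.random.default_rng(0)
unary = rng.random((8, 8, 2))
labels = icm(unary, lam=0.5)
print(mrf_energy(labels, unary, lam=0.5))
```

The paper's energy additionally couples labels across frames through estimated motion; the sketch keeps only the within-frame coherence term.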


British Machine Vision Conference | 2010

Motion Coherent Tracking with Multi-label MRF Optimization

David Tsai; Matthew Flagg; James M. Rehg

We present a novel off-line algorithm for target segmentation and tracking in video. In our approach, video data is represented by a multi-label Markov Random Field model, and segmentation is accomplished by finding the minimum energy label assignment. We propose a novel energy formulation which incorporates both segmentation and motion estimation in a single framework. Our energy functions enforce motion coherence both within and across frames. We utilize state-of-the-art methods to efficiently optimize over a large number of discrete labels. In addition, we introduce a new ground-truth dataset, called SegTrack, for the evaluation of segmentation accuracy in video tracking. We compare our method with two recent on-line tracking algorithms and provide quantitative and qualitative performance comparisons.


User Interface Software and Technology | 2006

Projector-guided painting

Matthew Flagg; James M. Rehg

This paper presents a novel interactive system for guiding artists to paint using traditional media and tools. The enabling technology is a multi-projector display capable of controlling the appearance of an artist's canvas. This display-on-canvas guides the artist to construct the painting as a series of layers. Our process model for painting is based on classical techniques and was designed to address three main issues which are challenging to novices: (1) positioning and sizing elements on the canvas, (2) executing the brushstrokes to achieve a desired texture, and (3) mixing pigments to make a target color. These challenges are addressed through a set of interaction modes. Preview and color selection modes enable the artist to focus on the current target layer by highlighting the areas of the canvas to be painted. Orientation mode displays brushstroke guidelines for the creation of desired brush texture. Color mixing mode guides the artist through the color mixing process with a user interface similar to a color wheel. These interaction modes allow a novice artist to focus on a series of manageable subtasks in executing a complex painting. Our system covers the gamut of the painting process from overall composition down to detailed brushwork. We present results from a user study which quantify the benefit that our system can provide to a novice painter.
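
The color mixing mode is described only at the interface level. As a loose illustration of the kind of suggestion such a mode could compute, here is a sketch assuming a hypothetical four-pigment palette and a linear RGB mixing model (real pigment mixing is subtractive and nonlinear, so this is not the paper's method):

```python
import numpy as np

# Hypothetical palette of base pigments in RGB. Linear RGB blending is
# only a crude stand-in for how physical pigments actually combine.
palette = {
    "white":  np.array([1.00, 1.00, 1.00]),
    "red":    np.array([0.80, 0.10, 0.10]),
    "yellow": np.array([0.95, 0.85, 0.10]),
    "blue":   np.array([0.10, 0.20, 0.70]),
}

def suggest_mix(target, steps=21):
    """Search two-pigment blends for the one closest to the target color."""
    best = None
    names = list(palette)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            for t in np.linspace(0.0, 1.0, steps):
                mix = t * palette[a] + (1 - t) * palette[b]
                err = float(np.linalg.norm(mix - target))
                if best is None or err < best[0]:
                    best = (err, a, b, float(t))
    return best

err, a, b, t = suggest_mix(np.array([0.85, 0.55, 0.15]))  # an orange
print(f"mix {t:.2f} {a} + {1 - t:.2f} {b} (error {err:.3f})")
```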


Computer Vision and Pattern Recognition | 2004

A flexible projector-camera system for multi-planar displays

M. Ashdown; Matthew Flagg; Rahul Sukthankar; James M. Rehg

We present a novel multi-planar display system based on an uncalibrated projector-camera pair. Our system exploits the juxtaposition of planar surfaces in a room to create ad-hoc visualization and display capabilities. In an office setting, for example, a desk pushed against a wall provides two perpendicular surfaces that can simultaneously display elevation and plan views of an architectural model. In contrast to previous room-level projector-camera systems, our method is based on a flexible calibration procedure that requires minimal information about the geometry of the multi-planar surface configuration. A number of display configurations can be created on any available planar surfaces using a single commodity projector and camera. The key to our calibration approach is an efficient technique for simultaneously localizing multiple planes and a robust planar metric rectification method, which can tolerate a restricted camera field-of-view and requires no special calibration objects. We demonstrate the robustness of our calibration method using real and synthetic images and present several applications of our display system.
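
The calibration procedure itself is in the paper; the basic primitive underlying any plane-to-plane projector-camera mapping is a homography fitted to point correspondences. Below is a minimal direct linear transform (DLT) sketch with illustrative coordinates, not the paper's multi-plane localization or metric rectification:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (homogeneous
    coordinates) from >= 4 correspondences, via the direct linear
    transform: stack the linearized constraints and take the SVD
    null vector."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    pts = np.asarray(pts, dtype=float)
    ph = np.c_[pts, np.ones(len(pts))] @ H.T
    return ph[:, :2] / ph[:, 2:]

# Map the projector framebuffer corners onto a quad seen by the camera.
src = [(0, 0), (1024, 0), (1024, 768), (0, 768)]
dst = [(80, 60), (900, 95), (870, 700), (110, 680)]
H = fit_homography(src, dst)
print(apply_homography(H, src))  # ~ dst
```

In the paper this primitive is extended to localize several planes simultaneously and to metrically rectify them; the sketch covers only a single-plane fit.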


Interactive 3D Graphics and Games | 2009

Human video textures

Matthew Flagg; Atsushi Nakazawa; Qiushuang Zhang; Sing Bing Kang; Young Kee Ryu; Irfan A. Essa; James M. Rehg

This paper describes a data-driven approach for generating photorealistic animations of human motion. Each animation sequence follows a user-choreographed path and plays continuously by seamlessly transitioning between different segments of the captured data. To produce these animations, we capitalize on the complementary characteristics of motion capture data and video. We customize our capture system to record motion capture data that are synchronized with our video source. Candidate transition points in video clips are identified using a new similarity metric based on 3-D marker trajectories and their 2-D projections into video. Once the transitions have been identified, a video-based motion graph is constructed. We further exploit hybrid motion and video data to ensure that the transitions are seamless when generating animations. Motion capture marker projections serve as control points for segmentation of layers and nonrigid transformation of regions. This allows warping and blending to generate seamless in-between frames for animation. We show a series of choreographed animations of walks and martial arts scenes as validation of our approach.
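
The transition metric is only summarized above. Here is a toy sketch of the underlying idea, scoring candidate cut points by how well marker positions and velocities match, assuming a hypothetical (frames, markers, 2) array of 2-D marker projections rather than the paper's actual hybrid 3-D/2-D metric:

```python
import numpy as np

def transition_cost(markers, i, j):
    """Cost of jumping from frame i to frame j, using 2-D marker
    projections of shape (frames, markers, 2). Penalizes both pose
    mismatch and velocity mismatch so motion stays coherent across
    the cut."""
    pose = np.linalg.norm(markers[i] - markers[j], axis=1).mean()
    vi = markers[i] - markers[i - 1]
    vj = markers[j] - markers[j - 1]
    vel = np.linalg.norm(vi - vj, axis=1).mean()
    return pose + vel

def best_transition(markers, min_gap=10):
    """Exhaustively score frame pairs far enough apart and return the
    cheapest (i, j) cut point, as a motion-graph edge candidate."""
    n = len(markers)
    pairs = ((i, j) for i in range(1, n) for j in range(1, n)
             if abs(i - j) >= min_gap)
    return min(pairs, key=lambda p: transition_cost(markers, *p))

rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(size=(120, 15, 2)), axis=0)  # fake trajectories
print(best_transition(walk))
```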


IEEE Transactions on Visualization and Computer Graphics | 2007

Shadow Elimination and Blinding Light Suppression for Interactive Projected Displays

Jay W. Summet; Matthew Flagg; Tat-Jen Cham; James M. Rehg; Rahul Sukthankar

A major problem with interactive displays based on front projection is that users cast undesirable shadows on the display surface. This paper demonstrates that shadows can be muted by redundantly illuminating the display surface using multiple projectors, all mounted at different locations. However, this technique alone does not eliminate shadows: Multiple projectors create multiple dark regions on the surface (penumbral occlusions) and cast undesirable light onto the users. These problems can be solved by eliminating shadows and suppressing the light that falls on occluding users by actively modifying the projected output. This paper categorizes various methods that can be used to achieve redundant illumination, shadow elimination, and blinding light suppression and evaluates their performance.
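
As a schematic of the feedback idea (not any one of the specific methods the paper evaluates), the sketch below assumes two registered projectors, single-channel images in [0, 1], and hypothetical per-pixel intensity weights:

```python
import numpy as np

def compensate(captured, expected, alpha_a, alpha_b, gain=0.5, thresh=0.1):
    """One feedback step for two redundantly aligned projectors.

    captured, expected: registered single-channel camera images in [0, 1]
    alpha_a, alpha_b:   per-pixel intensity weights of projectors A and B
    Where the surface is darker than expected (a penumbra from someone
    occluding projector A), shift illumination onto projector B, and
    zero A there so no blinding light falls on the occluding user.
    """
    shadow = (expected - captured) > thresh        # occluded-pixel mask
    alpha_b = np.where(shadow, np.clip(alpha_b + gain, 0.0, 1.0), alpha_b)
    alpha_a = np.where(shadow, 0.0, alpha_a)       # suppress light on user
    return alpha_a, alpha_b

# Toy frame: projector A's light is blocked in a band of the surface.
expected = np.full((4, 8), 0.8)
captured = expected.copy()
captured[:, 2:4] = 0.4                             # penumbral occlusion
a, b = compensate(captured, expected, np.full((4, 8), 0.5), np.full((4, 8), 0.5))
print(a)
print(b)
```

A real system must also predict `expected` by warping the framebuffer through each projector's calibration; the sketch takes that prediction as given.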


International Conference on Control, Automation, Robotics and Vision | 2002

Projected light displays using visual feedback

James M. Rehg; Matthew Flagg; Tat-Jen Cham; Rahul Sukthankar; Gita Sukthankar

A system of coordinated projectors and cameras enables the creation of projected light displays that are robust to environmental disturbances. This paper describes approaches for tackling both geometric and photometric aspects of the problem: (1) the projected image remains stable even when the system components (projector, camera or screen) are moved; (2) the display automatically removes shadows caused by users moving between a projector and the screen, while simultaneously suppressing projected light on the user. The former can be accomplished without knowing the positions of the system components. The latter can be achieved without direct observation of the occluder. We demonstrate that the system responds quickly to environmental disturbances and achieves low steady-state errors.


IEEE Transactions on Visualization and Computer Graphics | 2013

Video-Based Crowd Synthesis

Matthew Flagg; James M. Rehg

As a controllable medium, video-realistic crowds are important for creating the illusion of a populated reality in special effects, games, and architectural visualization. While recent progress in simulation and motion-capture-based techniques for crowd synthesis has focused on natural macroscale behavior, this paper addresses the complementary problem of synthesizing crowds with realistic microscale behavior and appearance. Example-based synthesis methods such as video textures are an appealing alternative to conventional model-based methods, but current techniques are unable to represent and satisfy constraints between video sprites and the scene. This paper describes how to synthesize crowds by segmenting pedestrians from input videos of natural crowds and optimally placing them into an output video while satisfying environmental constraints imposed by the scene. We introduce crowd tubes, a representation of video objects designed to compose a crowd of video billboards while avoiding collisions between static and dynamic obstacles. The approach consists of representing crowd tube samples and constraint violations with a conflict graph. The maximal independent set yields a dense constraint-satisfying crowd composition. We present a prototype system for the capture, analysis, synthesis, and control of video-based crowds. Several results demonstrate the system's ability to generate videos of crowds which exhibit a variety of natural behaviors.
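
The composition step is framed as a maximal-independent-set problem on a conflict graph. Here is a small sketch under strong simplifying assumptions: crowd tubes reduced to per-frame 2-D footprint centers, a distance threshold as the only constraint, and a greedy maximal (not maximum or densest) independent set:

```python
def conflicts(tube_a, tube_b, min_dist=1.0):
    """Two crowd tubes conflict if their sprites come too close at any
    shared time step. A tube here is a dict: frame -> (x, y) center."""
    shared = tube_a.keys() & tube_b.keys()
    return any(
        (tube_a[t][0] - tube_b[t][0]) ** 2 +
        (tube_a[t][1] - tube_b[t][1]) ** 2 < min_dist ** 2
        for t in shared
    )

def maximal_independent_set(tubes):
    """Greedy maximal independent set over the conflict graph: place a
    tube only if it conflicts with no tube already placed."""
    placed = []
    for tube in tubes:
        if not any(conflicts(tube, other) for other in placed):
            placed.append(tube)
    return placed

# Three toy tubes; the second collides with the first around frame 1.
tubes = [
    {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (2.0, 0.0)},
    {1: (1.2, 0.3), 2: (1.5, 0.5)},
    {0: (5.0, 5.0), 1: (5.0, 6.0)},
]
print(len(maximal_independent_set(tubes)))  # -> 2
```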


Computer Vision and Pattern Recognition | 2005

Improving the Speed of Virtual Rear Projection: A GPU-Centric Architecture

Matthew Flagg; Jay W. Summet; James M. Rehg

Projection is the only viable way to produce very large displays. Rear projection of large-scale upright displays is often preferred over front projection because of the lack of shadows that occlude the projected image. However, rear projection is not always a feasible option for space and cost reasons. Recent research suggests that many of the desirable features of rear projection, in particular the lack of shadows, can be reproduced using active virtual rear projection (VRP). We present a new approach to shadow detection that addresses limitations of previous work. Furthermore, we demonstrate how to exploit the image processing capabilities of a GPU to shift the main performance bottleneck from image processing to camera capture and projector display rates. The improvements presented in this paper increase the image processing rate from 15 Hz to 110 Hz in our new active VRP prototype.
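
The abstract does not spell out the detector; a common predict-and-difference formulation is per-pixel and therefore data-parallel, which is what makes a GPU fragment-shader implementation natural. Below is a NumPy stand-in for that per-pixel test (hypothetical names and threshold, not the paper's exact detector):

```python
import numpy as np

def detect_shadows(captured, predicted, thresh=0.15):
    """Per-pixel shadow mask: flag pixels noticeably darker than the
    image the system predicts the camera should see. Every pixel is
    independent of its neighbors, which is exactly the structure a
    fragment shader exploits to run this at projector frame rates."""
    return (predicted - captured) > thresh

rng = np.random.default_rng(2)
predicted = np.clip(rng.random((480, 640)), 0.3, 1.0)
captured = predicted.copy()
captured[100:300, 200:320] *= 0.4     # simulated occluder shadow
mask = detect_shadows(captured, predicted)
print(mask.sum(), "shadowed pixels")
```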


ACM Multimedia | 2006

GVU-PROCAMS: enabling novel projected interfaces

Jay W. Summet; Matthew Flagg; James M. Rehg; Gregory D. Abowd; Neil R. Weston

Front projection allows large displays to be deployed relatively easily. However, it is sometimes difficult to find a location to place a projector, especially for ad-hoc installations. Additionally, front projection suffers from shadows and occlusions, making it ill-suited for interactive displays. The GVU-PROCAMS system allows programmers to deploy projectors and displays easily in arbitrary locations by enabling enhanced keystone correction via warping on 3D hardware. In addition, it handles the calibration of multiple projectors using computer vision to produce a redundantly illuminated surface. Redundant illumination offers robustness in the face of occlusions, providing a user with the experience of a rear-projected surface. This paper presents a stand-alone application (WinPVRP) and a programming system (GVU-PROCAMS) that allow others to easily create projected displays with enhanced warping and redundant illumination.

Collaboration


Dive into Matthew Flagg's collaborations.

Top Co-Authors

James M. Rehg, Georgia Institute of Technology
Jay W. Summet, Georgia Institute of Technology
David Tsai, Georgia Institute of Technology
Tat-Jen Cham, Nanyang Technological University
Gregory D. Abowd, Georgia Institute of Technology
Irfan A. Essa, Georgia Institute of Technology
Neil R. Weston, Georgia Institute of Technology