Peter Barnum
Carnegie Mellon University
Publications
Featured research published by Peter Barnum.
International Journal of Computer Vision | 2010
Peter Barnum; Srinivasa G. Narasimhan; Takeo Kanade
Dynamic weather such as rain and snow causes complex spatio-temporal intensity fluctuations in videos. Such fluctuations can adversely impact vision systems that rely on small image features for tracking, object detection, and recognition. While these effects appear chaotic in space and time, we show that dynamic weather has a predictable global effect in frequency space. To exploit this, we first develop a model of the shape and appearance of a single rain or snow streak in image space. Detecting individual streaks is difficult even with an accurate appearance model, so we combine the streak model with the statistical characteristics of rain and snow to create a model of the overall effect of dynamic weather in frequency space. This model is then fit to a video and used to detect rain or snow streaks, first in frequency space; the detection result is then transferred to image space. Once detected, the amount of rain or snow can be reduced or increased. We demonstrate that our frequency analysis allows for greater accuracy in the removal of dynamic weather and in the performance of feature extraction than previous pixel-based or patch-based methods. We also show that, unlike previous techniques, our approach is effective for videos with both scene and camera motion.
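The central observation, that streaks which look chaotic pixel by pixel leave a predictable global signature in frequency space, can be illustrated with a toy computation. This is a minimal sketch of the general idea rather than the paper's streak or frequency model: it measures per-pixel temporal FFT energy above a cutoff, under the assumption that pixels crossed by streaks flicker much faster than the static scene. The function name and cutoff value are illustrative.

```python
import numpy as np

def temporal_highpass_energy(video, cutoff=0.25):
    """Per-pixel energy above `cutoff` (cycles/frame) in a (T, H, W) video.

    Pixels crossed by fast-flickering streaks carry far more high-frequency
    temporal energy than pixels showing a static scene.
    """
    spectrum = np.fft.rfft(video, axis=0)     # temporal FFT at every pixel
    freqs = np.fft.rfftfreq(video.shape[0])   # bin frequencies in [0, 0.5]
    mask = freqs >= cutoff                    # keep only the high bins
    return np.sum(np.abs(spectrum[mask]) ** 2, axis=0)
```

On a synthetic clip where one pixel is constant and another alternates every frame, the alternating pixel dominates the high-frequency energy map, mirroring the kind of separation a frequency-space detector relies on.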
International Conference on Computer Graphics and Interactive Techniques | 2010
Peter Barnum; Srinivasa G. Narasimhan; Takeo Kanade
We present a multi-layered display that uses water drops as voxels. Water drops refract most incident light, making them excellent wide-angle lenses. Each 2D layer of our display can exhibit arbitrary visual content, creating a layered-depth (2.5D) display. Our system consists of a single projector-camera system and a set of linear drop generator manifolds that are tightly synchronized and controlled by a computer. Following the principles of fluid mechanics, we are able to accurately generate and control drops so that, at any time instant, no two drops occupy the same projector pixel's line of sight. This drop control is combined with an algorithm for space-time division of projector light rays. Our prototype system has up to four layers, with each layer consisting of a row of 50 drops that can be generated at up to 60 Hz. The effective resolution of the display is 50 × projector vertical resolution × number of layers. We show how this water drop display can be used for text, videos, and interactive games.
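The tight synchronization between drop generation and the projector can be sketched with basic free-fall kinematics. This is an illustrative assumption, not the paper's control algorithm: it supposes a drop leaves its manifold at a known time with a known initial speed and falls under gravity alone, then computes which 60 Hz projector frame should light it at a given depth. All names and parameters here are hypothetical.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def fall_time(depth, v0=0.0):
    """Seconds for a drop released with downward speed v0 (m/s) to fall
    `depth` metres, solving depth = v0*t + 0.5*G*t**2 for t."""
    return (math.sqrt(v0 * v0 + 2.0 * G * depth) - v0) / G

def illumination_frame(release_time, depth, fps=60):
    """Index of the projector frame (at `fps`) in which to light a drop
    released at `release_time` once it has fallen `depth` metres."""
    return round((release_time + fall_time(depth)) * fps)
```

A drop released from rest takes 0.1 s to fall about 4.9 cm, so it would be lit in frame 6 of a 60 Hz projector; staggering release times per manifold is one way to keep drops out of each other's projector lines of sight.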
International Symposium on Mixed and Augmented Reality | 2009
Peter Barnum; Yaser Sheikh; Ankur Datta; Takeo Kanade
This paper presents a method to create the illusion of seeing moving objects through occluding surfaces in a video. The illusion is achieved by transferring information from a camera viewing the occluded area. Typical view interpolation approaches for 3D scenes require some form of correspondence across views. For occluded areas, establishing direct correspondence is impossible because information is missing in one of the views. Instead, we use a 2D projective invariant to capture information about occluded objects (which may be moving). Since invariants are quantities that do not change across views, a visually compelling rendering of hidden areas is achieved without the need for explicit correspondences. A piecewise-planar model of the scene allows the entire rendering process to take place without any 3D reconstruction, while still producing visual parallax. Because of the simplicity and robustness of the 2D invariant, we are able to transfer both static backgrounds and moving objects in real time. A complete working system has been implemented that runs live at 5 Hz. Applications for this technology include the ability to look around corners at tight intersections for automobile safety, concurrent visualization of a surveillance camera network, and monitoring systems for patients, the elderly, and children.
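The abstract does not name the particular 2D projective invariant used, but the textbook example of such a quantity is the cross-ratio of four collinear points, which is preserved by any projective transformation. The sketch below, with hypothetical names, checks that preservation numerically for a 1D homography:

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio of four distinct collinear points, given as scalar
    coordinates along their common line."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def homography_1d(x, h):
    """Apply the 1D projective map x -> (h00*x + h01) / (h10*x + h11)."""
    return (h[0][0] * x + h[0][1]) / (h[1][0] * x + h[1][1])
```

Mapping the points 0, 1, 2, 4 through an invertible 2x2 matrix leaves their cross-ratio at 1.5. A quantity that agrees across views in this way can be matched between cameras without establishing explicit point correspondences, which is the general principle the paper exploits.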
Computer Vision and Pattern Recognition | 2009
Peter Barnum; Srinivasa G. Narasimhan; Takeo Kanade
Various non-traditional media, such as water drops, mist, and fire, have been used to create vibrant two- and three-dimensional displays. Such displays usually require a great deal of design and engineering. In this work, we show a computer-vision-based approach to easily calibrate and learn the properties of a three-dimensional water drop display, using a few pieces of off-the-shelf hardware. Our setup consists of a camera, a projector, a laser plane, and a water drop generator. Based on the geometric calibration of this hardware, a user can “paint” the drops from the point of view of the camera, causing the projector to illuminate them with the correct color at the correct time. We first demonstrate an algorithm for the case where no drop occludes another from the point of view of either the camera or the projector. If there is no occlusion, the system can be trained once, and the projector plays a precomputed movie. We then show our work toward a display with real rain. In real time, our system tracks and predicts the future locations of hundreds of drops per second, then projects rays to hit or miss each drop.
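Predicting the future location of a falling drop, as the real-rain system must do for hundreds of drops per second, can be sketched as constant-acceleration extrapolation from two tracked observations. This is an assumed simplification (free fall, no air drag, vertical motion only), not the paper's tracker, and the function name is hypothetical.

```python
G = 9.81  # gravitational acceleration, m/s^2

def predict_drop_y(y1, y2, dt, lead):
    """Predict the vertical position of a drop `lead` seconds after the
    second of two observations y1, y2 (metres, downward-positive) taken
    `dt` seconds apart, assuming free fall at constant G."""
    v = (y2 - y1) / dt + 0.5 * G * dt   # velocity at the second observation
    return y2 + v * lead + 0.5 * G * lead ** 2
```

Because a free-fall trajectory is quadratic in time, two observations determine it exactly under these assumptions; in practice such a prediction has to absorb the camera-to-projector latency of the live system.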
Proceedings of the First International Workshop on Photometric Analysis for Computer Vision (PACV) | 2007
Peter Barnum; Takeo Kanade; Srinivasa G. Narasimhan
International Symposium on Biomedical Imaging | 2008
Peter Barnum; Mei Chen; Hiroshi Ishikawa; Gadi Wollstein; Joel S. Schuman
Archive | 2004
Christopher M. Brown; Peter Barnum; Dave Costello; George Ferguson; Bo Hu; Michael Van Wie
National Conference on Artificial Intelligence | 2005
Christopher M. Brown; George Ferguson; Peter Barnum; Bo Hu; Dave Costello
Archive | 2003
Peter Barnum; Bo Hu; Christopher M. Brown
Archive | 2003
Bo Hu; Peter Barnum; Christopher M. Brown