Publication


Featured research published by Michael W. Hansen.


Workshop on Applications of Computer Vision | 1994

Real-time scene stabilization and mosaic construction

Michael W. Hansen; P. Anandan; K. Dana; G. van der Wal; Peter J. Burt

We describe a real-time system designed to construct a stable view of a scene by aligning images of an incoming video stream and dynamically constructing an image mosaic. This system uses a video processing unit developed by the David Sarnoff Research Center, called the Vision Front End (VFE-100), for the pyramid-based image processing tasks required to implement this process. This paper includes a description of the multiresolution coarse-to-fine image registration strategy, the techniques used for mosaic construction, the implementation of this process on the VFE-100 system, and experimental results showing image mosaics constructed with the VFE-100.
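
As a hedged illustration of the coarse-to-fine registration idea (not the VFE-100 implementation, which used Gaussian/Laplacian pyramids in hardware and more general motion models), a minimal NumPy sketch with hypothetical helpers `phase_shift` and `coarse_to_fine_shift` might estimate a frame-to-frame translation like this:

```python
import numpy as np

def phase_shift(a, b):
    """Integer translation (dy, dx) with np.roll(b, (dy, dx), (0, 1)) ~ a,
    read off the peak of the normalized cross-power spectrum."""
    r = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    r /= np.abs(r) + 1e-12
    corr = np.abs(np.fft.ifft2(r))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    dy -= corr.shape[0] * (dy > corr.shape[0] // 2)  # wrap to signed offsets
    dx -= corr.shape[1] * (dx > corr.shape[1] // 2)
    return int(dy), int(dx)

def coarse_to_fine_shift(a, b, levels=3):
    """Estimate at the coarsest level first, then refine at each finer level.
    Plain decimation stands in for proper Gaussian pyramid reduction."""
    dy = dx = 0
    for lvl in range(levels - 1, -1, -1):
        s = 2 ** lvl
        al, bl = a[::s, ::s], b[::s, ::s]
        bl = np.roll(bl, (dy // s, dx // s), axis=(0, 1))  # apply current guess
        ddy, ddx = phase_shift(al, bl)                     # residual shift
        dy, dx = dy + ddy * s, dx + ddx * s
    return dy, dx

# Usage: recover a known shift of a smooth test image.
y, x = np.mgrid[:256, :256].astype(np.float64)
img = np.exp(-((y - 90) ** 2 + (x - 140) ** 2) / 200.0)
moved = np.roll(img, (17, -23), axis=(0, 1))
print(coarse_to_fine_shift(img, moved))  # -> (-17, 23): maps moved back onto img
```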


Proceedings of the IEEE | 2001

Aerial video surveillance and exploitation

Rakesh Kumar; Harpreet S. Sawhney; Supun Samarasekera; Steve Hsu; Hai Tao; Yanlin Guo; Keith J. Hanna; Arthur R. Pope; Richard P. Wildes; David Hirvonen; Michael W. Hansen; Peter J. Burt

There is growing interest in performing aerial surveillance using video cameras. Compared to traditional framing cameras, video cameras provide the capability to observe ongoing activity within a scene and to automatically control the camera to track the activity. However, the high data rates and relatively small field of view of video cameras present new technical challenges that must be overcome before such cameras can be widely used. In this paper, we present a framework and details of the key components for real-time, automatic exploitation of aerial video for surveillance applications. The framework involves separating an aerial video into the natural components corresponding to the scene. Three major components of the scene are the static background geometry, moving objects, and appearance of the static and dynamic components of the scene. In order to delineate videos into these scene components, we have developed real-time image-processing techniques for 2-D/3-D frame-to-frame alignment, change detection, camera control, and tracking of independently moving objects in cluttered scenes. The geo-location of video and tracked objects is estimated by registration of the video to controlled reference imagery, elevation maps, and site models. Finally, static, dynamic, and reprojected mosaics may be constructed for compression, enhanced visualization, and mapping applications.
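
The detection component is described only at a framework level above; as a hedged sketch (hypothetical `change_mask`, a robust-thresholded frame difference standing in for the paper's actual detector), change detection after frame-to-frame alignment might look like:

```python
import numpy as np

def change_mask(aligned_prev, curr, k=4.0):
    """Flag pixels that differ from the already-aligned previous frame by
    more than k robust standard deviations; the median absolute deviation
    keeps small moving objects from inflating the noise estimate."""
    d = np.abs(curr.astype(np.float64) - aligned_prev.astype(np.float64))
    sigma = 1.4826 * np.median(np.abs(d - np.median(d)))  # MAD -> std
    return d > k * max(sigma, 1e-6)
```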


Proceedings Fifth IEEE International Workshop on Computer Architectures for Machine Perception | 2000

The Acadia vision processor

G. van der Wal; Michael W. Hansen; Michael Raymond Piacentino

Presented is a new 80 GOPS video-processing chip capable of performing video-rate vision applications. These applications include real-time video stabilization, mosaicking, video fusion, motion-stereo and video enhancement. The new vision chip, code-named Acadia, is the result of over 15 years of research and development by Sarnoff in the areas of multi-resolution pyramid-based vision processing and efficient computational architectures. The Acadia chip represents the third generation of ASIC technology developed by Sarnoff, and incorporates the processing functions found in Sarnoff's earlier PYR-1 and PYR-2 pyramid processing chips as well as numerous other functions found in Sarnoff-developed video processing systems, including the PVT200. A demonstration board is being implemented and includes two video decoders, a video encoder and a PCI interface.


Pattern Recognition | 2001

Pattern-selective color image fusion

Luca Bogoni; Michael W. Hansen

This paper introduces pattern-selective color image fusion and shows how it can be applied to two domains of image enhancement: extension of dynamic range and depth of focus. Pattern-selective fusion methods provide a mechanism for combining multiple monochromatic source images through identifying salient features in the source images at multiple scales and orientations, and combining those features into a single composite image result. In this paper, the pattern-selective fusion method is generalized into a framework that is equally applicable to monochrome, color, and multi-spectral imagery. This proposed fusion framework is then used to combine a set of color source images, taken from a sensor with varying aperture and focus settings, into a single fused image result that has improved dynamic range and depth of field over any of the other frames in the input sequence. Experimental results show the performance of the dynamic range and depth-of-field extension on imagery taken from consumer-grade video camera equipment.
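
A minimal single-channel sketch of the choose-max pyramid rule behind pattern-selective fusion (for color it would run per channel; block averaging and pixel replication stand in for proper Gaussian REDUCE and EXPAND filters, and all names here are hypothetical):

```python
import numpy as np

def reduce2(img):
    """Halve resolution by 2x2 block averaging (stand-in for pyramid REDUCE)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def expand2(img, shape):
    """Double resolution by pixel replication, edge-padded to `shape`
    (stand-in for pyramid EXPAND)."""
    up = np.kron(img, np.ones((2, 2)))
    pad = ((0, shape[0] - up.shape[0]), (0, shape[1] - up.shape[1]))
    return np.pad(up, pad, mode="edge")

def laplacian_pyramid(img, levels):
    pyr, cur = [], img.astype(np.float64)
    for _ in range(levels - 1):
        nxt = reduce2(cur)
        pyr.append(cur - expand2(nxt, cur.shape))  # band-pass detail
        cur = nxt
    pyr.append(cur)                                # low-pass residual
    return pyr

def fuse(a, b, levels=4):
    """Pattern-selective fusion: per band, keep the larger-magnitude (more
    salient) coefficient; average the low-pass residuals; reconstruct."""
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    out = (pa[-1] + pb[-1]) / 2
    for la, lb in zip(reversed(pa[:-1]), reversed(pb[:-1])):
        band = np.where(np.abs(la) >= np.abs(lb), la, lb)
        out = band + expand2(out, band.shape)
    return out
```

For the dynamic-range and depth-of-focus application, `a` and `b` would be aligned frames captured at different exposure or focus settings.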


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1997

Relaxation methods for supervised image segmentation

Michael W. Hansen; William E. Higgins

We propose two methods for supervised image segmentation: supervised relaxation labeling and watershed-driven relaxation labeling. The methods are particularly well suited to problems in 3D medical image analysis, where the images are large, the regions are topologically complex, and the tolerance of errors is low. Each method uses predefined cues for supervision. The cues can be defined interactively or automatically, depending on the application. The cues provide statistical region information and region topological constraints. Supervised relaxation labeling exhibits strong noise resilience. Watershed-driven relaxation labeling combines the strengths of watershed analysis and supervised relaxation labeling to give a computationally efficient noise-resistant method. Extensive results for 2D and 3D images illustrate the effectiveness of the methods.
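
The classic relaxation-labeling update that such methods build on can be sketched as follows (a simplified version, not the paper's exact scheme; supervision enters through the initial probabilities, e.g. cue pixels clamped to their known class; `relax` is a hypothetical name):

```python
import numpy as np

def relax(prob, compat, iters=20):
    """Classic relaxation-labeling update on a 4-connected pixel grid.
    prob:   (H, W, L) initial label probabilities (supervision enters here).
    compat: (L, L) compatibility coefficients in [-1, 1]; compat[i, j] > 0
            means label j on a neighbor supports label i here."""
    p = prob.copy()
    for _ in range(iters):
        nb = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
              np.roll(p, 1, 1) + np.roll(p, -1, 1)) / 4.0
        q = nb @ compat.T                  # support q_i = sum_j compat[i,j]*nb_j
        p = np.clip(p * (1.0 + q), 1e-9, None)
        p /= p.sum(axis=2, keepdims=True)  # renormalize to probabilities
    return p

# hard segmentation after refinement: relax(prob, compat).argmax(axis=2)
```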


Computer Vision and Pattern Recognition | 1998

Image alignment for precise camera fixation and aim

Lambert E. Wixson; Jayakrishnan Eledath; Michael W. Hansen; Robert Mandelbaum; Deepam Mishra

Two important problems in camera control are how to keep a moving camera fixated on a target point, and how to precisely aim a camera, whose approximate pose is known, towards a given 3D position. This paper describes how electronic image alignment techniques can be used to solve these problems, as well as provide other benefits such as stabilized video. With these techniques, stabilized, fixated imagery is obtained despite large latencies in the control loop, even for simple control strategies. The approach has been tested using an airborne camera and real-time affine image alignment.
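
As a stand-in for the paper's real-time alignment engine (not the authors' method), OpenCV's ECC-based estimator can align the current frame to a stored fixation reference; `fixate` is a hypothetical name, and depending on the OpenCV version `findTransformECC` may accept fewer arguments:

```python
import cv2
import numpy as np

def fixate(reference, frame, iters=50, eps=1e-4):
    """Affine-align `frame` to a reference fixation view by maximizing the
    ECC similarity. Inputs are single-channel 8-bit images."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, iters, eps)
    _, warp = cv2.findTransformECC(reference, frame, warp,
                                   cv2.MOTION_AFFINE, criteria, None, 5)
    h, w = reference.shape
    # WARP_INVERSE_MAP resamples `frame` back into the reference's coordinates
    return cv2.warpAffine(frame, warp, (w, h),
                          flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```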


International Conference on Image Analysis and Processing | 1999

Image enhancement using pattern-selective color image fusion

Luca Bogoni; Michael W. Hansen; Peter J. Burt

This paper introduces a method for extending the effective dynamic range and depth of focus of a sensor through pattern-selective color image fusion. Pattern-selective fusion methods provide a mechanism for combining multiple monochromatic source images through identifying salient features in the source images at multiple scales and orientations, and combining those features into a single fused image result. In this paper the pattern-selective fusion method is generalized into a framework that is equally applicable to monochrome, color and multi-spectral imagery. This proposed fusion framework is then used to combine a set of color source images, taken from a sensor with varying aperture and focus settings, into a single fused image result that has improved dynamic range and depth-of-field over any of the other frames in the input sequence. Experimental results show the performance of the dynamic range and depth-of-field extension on imagery taken from consumer-grade video camera equipment.
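
A complementary hedged sketch for the dynamic-range and depth-of-focus application: Mertens-style per-pixel selection weights (well-exposedness times local sharpness), a simpler stand-in for the pyramid-based saliency selection described above; `fusion_weights` is a hypothetical name:

```python
import numpy as np

def fusion_weights(frames, sigma=0.2):
    """Per-pixel selection weights for a multi-exposure / multi-focus stack:
    favor pixels that are well exposed (near mid-gray) and locally sharp
    (high Laplacian response). frames: list of 8-bit grayscale arrays."""
    ws = []
    for f in frames:
        g = f.astype(np.float64) / 255.0
        expo = np.exp(-((g - 0.5) ** 2) / (2 * sigma ** 2))   # well-exposedness
        lap = np.abs(4 * g
                     - np.roll(g, 1, 0) - np.roll(g, -1, 0)
                     - np.roll(g, 1, 1) - np.roll(g, -1, 1))  # sharpness
        ws.append(expo * (lap + 1e-6))
    ws = np.stack(ws)
    return ws / ws.sum(axis=0, keepdims=True)
```

A naive fused frame is then `(fusion_weights(frames) * np.stack(frames)).sum(axis=0)`, though the paper performs the selection within a multiresolution pyramid, which avoids the seams this per-pixel version can produce.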


International Conference on Image Processing | 1994

Watershed-driven relaxation labeling for image segmentation

Michael W. Hansen; William E. Higgins

This paper introduces an image segmentation method referred to as watershed-driven relaxation labeling. The method is a hybrid segmentation process utilizing both watershed analysis and relaxation labeling. Initially, watershed analysis is used to subdivide an image into catchment basins, effectively clustering pixels together based on their spatial proximity and intensity homogeneity. Classification estimates in the form of probabilities are set for each of these catchment basins. Relaxation labeling is then used to iteratively refine and update the classifications of the catchment basins through propagating constraints and utilizing local information. The relaxation updating process is continued until a large majority of the catchment basins are unambiguously classified. The method provides fast, accurate segmentation results and exploits the individual strengths of watershed analysis and relaxation labeling. The robustness of the method is illustrated through comparisons to other popular segmentation techniques.
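
A hedged sketch of the watershed stage and the initial per-basin probabilities (hypothetical `basin_probabilities`, using scikit-image's watershed; these probabilities would then be refined by a relaxation update such as the one sketched earlier):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def basin_probabilities(img, class_means):
    """Oversegment with a watershed of the gradient magnitude, then give each
    catchment basin soft initial class probabilities from its mean intensity."""
    grad = ndi.gaussian_gradient_magnitude(img.astype(np.float64), sigma=1.0)
    basins = watershed(grad)               # floods from regional minima
    labels = np.unique(basins)
    means = ndi.mean(img, labels=basins, index=labels)
    d = np.abs(means[:, None] - np.asarray(class_means, dtype=np.float64)[None, :])
    prob = np.exp(-d / (d.mean() + 1e-9))  # nearer class mean -> higher probability
    prob /= prob.sum(axis=1, keepdims=True)
    return basins, labels, prob
```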


Field-Programmable Custom Computing Machines | 1999

Reconfigurable elements for a video pipeline processor

Michael Raymond Piacentino; G.S. van der Wal; Michael W. Hansen

This paper describes a family of reconfigurable processing elements (RPEs) used to support video processing for the Sarnoff Vision Front End 200 (VFE-200) vision system. Within the VFE-200, RPEs have been used to estimate visual motion, compute 3D scene structure using stereo analysis, perform geometric transformations (warps) on imagery with interpolation, and act as triple-ported frame-store memory units. The RPEs described in this paper incorporate complex DRAM memory control interfaces, high-precision fixed- and floating-point arithmetic (including floating-point division), and sophisticated hybrids of memory and computational functions. Within this paper, the architecture and implementation of the RPEs and the VFE-200 are described, and examples of how the RPEs are used to support specific computer vision functions at real-time video rates are presented.


IEEE Transactions on Image Processing | 1999

Watershed-based maximum-homogeneity filtering

Michael W. Hansen; William E. Higgins

We introduce an image enhancement method referred to as the watershed-based maximum-homogeneity filter. This method first uses watershed analysis to subdivide the image into homogeneous pixel clusters called catchment basins. Next, using an adaptive, local, catchment-basin selection scheme, similar neighboring catchment basins are combined to produce an enhanced image. Because the method starts with watershed analysis, it can preserve edge information and run with high computational efficiency. Illustrative results show that the method performs well relative to other popular nonlinear filters.
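
A hedged approximation of the filter's structure (hypothetical `merge_similar_basins`; a fixed tolerance replaces the paper's adaptive, local selection scheme):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def merge_similar_basins(img, tol=10.0, sigma=1.0):
    """Watershed-based homogeneity filtering, roughly: oversegment into
    catchment basins, merge adjacent basins with similar mean intensity,
    and output each merged region's mean (edges survive because basin
    boundaries follow gradient ridges)."""
    grad = ndi.gaussian_gradient_magnitude(img.astype(np.float64), sigma=sigma)
    basins = watershed(grad)                        # floods from regional minima
    labels = np.unique(basins)
    means = dict(zip(labels, ndi.mean(img, labels=basins, index=labels)))

    parent = {l: l for l in labels}                 # union-find over basins
    def find(l):
        while parent[l] != l:
            parent[l] = parent[parent[l]]
            l = parent[l]
        return l

    # adjacent basin pairs from horizontal and vertical label transitions
    pairs = set(zip(basins[:, :-1].ravel(), basins[:, 1:].ravel()))
    pairs |= set(zip(basins[:-1, :].ravel(), basins[1:, :].ravel()))
    for a, b in pairs:
        if a != b and abs(means[a] - means[b]) < tol:
            parent[find(a)] = find(b)

    merged = np.vectorize(find)(basins)
    kept = np.unique(merged)
    lut = np.zeros(int(merged.max()) + 1)
    lut[kept] = ndi.mean(img, labels=merged, index=kept)
    return lut[merged]                              # piecewise-constant output
```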

Collaboration


Dive into Michael W. Hansen's collaborations.

Top Co-Authors

William E. Higgins

Pennsylvania State University

Michal Irani

Weizmann Institute of Science
