Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kwangyoen Wohn is active.

Publication


Featured research published by Kwangyoen Wohn.


Computer Vision and Pattern Recognition | 1988

Pyramid based depth from focus

Trevor Darrell; Kwangyoen Wohn

A method is presented for depth recovery through the analysis of scene sharpness across changing focus position. Modeling a defocused image as the application of a low-pass filter to a properly focused image of the same scene, the authors can compare the high spatial frequency content of regions in each image and determine the correct focus position. Recovering depth in this manner is inherently a local operation, and can be done efficiently using a pipelined image processor. Laplacian and Gaussian pyramids are used to calculate sharpness maps, which are collected and compared to find the focus position that maximizes high spatial frequencies for each region.
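
The selection rule described above lends itself to a short sketch: per-pixel sharpness is measured as squared Laplacian energy, and the focus position maximizing it is chosen per pixel. Below is an illustrative NumPy version; the `laplacian` and `depth_from_focus` names are ours, not the paper's, and the original used full Laplacian/Gaussian pyramids on a pipelined image processor, whereas this flat single-scale version only conveys the idea.

```python
import numpy as np

def laplacian(img):
    """Discrete 4-neighbour Laplacian with edge-replicated borders."""
    p = np.pad(img.astype(float), 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4.0 * p[1:-1, 1:-1])

def depth_from_focus(stack):
    """stack: images taken at increasing focus positions.
    For each pixel, return the index of the focus position that
    maximizes high-spatial-frequency energy (squared Laplacian),
    i.e. a coarse depth map indexed by focus setting."""
    sharpness = np.stack([laplacian(img) ** 2 for img in stack])
    return np.argmax(sharpness, axis=0)
```

In practice the sharpness maps would be smoothed over a neighborhood before the argmax, since a single pixel's Laplacian response is noisy.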


Pattern Recognition Letters | 1990

Depth from focus using pyramid architecture

Trevor Darrell; Kwangyoen Wohn

A method is presented for depth recovery through the analysis of scene sharpness across changing focus position. Modeling a defocused image as the application of a low-pass filter to a properly focused image of the same scene, we can compare the high spatial frequency content of regions in each image and determine the correct focus position. Recovering depth in this manner is inherently a local operation, and can be done efficiently using a pipelined image processor. Laplacian and Gaussian pyramids are used to calculate sharpness maps which are collected and compared to find the focus position that maximizes high spatial frequencies for each region.


Computer Vision, Graphics, and Image Processing | 1990

The analytic structure of image flows: deformation and segmentation

Kwangyoen Wohn; Allen M. Waxman

Time-varying imagery is often described in terms of image flow fields (i.e., image motion), which correspond to the perspective projection of feature motions in three dimensions (3D). In the case of multiple moving objects with smooth surfaces, the image flow possesses an analytic structure that reflects these 3D properties. This paper describes the analytic structure of image flow fields in the image space-time domain, and its use for segmentation and 3D motion computation. First we discuss the local flow structure as embodied in the concept of neighborhood deformation. The local image deformation is effectively represented by a set of 12 basis deformations, each of which is responsible for an independent deformation. This local representation provides us with sufficient information for the recovery of 3D object structure and motion, in the case of relative rigid-body motions. We next discuss the global flow structure embodied in the partitioning of the entire image plane into analytic regions separated by boundaries of analyticity, such that each small neighborhood within an analytic region is described in terms of the deformation bases. This analysis reveals an effective mechanism for detecting the analytic boundaries of flow fields, thereby segmenting the image into meaningful regions. The notion of consistency, which is often used in image segmentation, is made explicit by the mathematical notion of analyticity derived from the projection relation of 3D object motion. The concept of flow analyticity is then extended to the temporal domain, suggesting a more robust algorithm for recovering image flow from multiple frames. Finally, we argue that the process of flow segmentation can be understood in the framework of a grouping process. The general concept of coherence, or grouping through local support (such as the second-order flows in our case), is discussed.
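
The 12 basis deformations correspond to a second-order (quadratic) polynomial model of the flow: six coefficients per velocity component. As a hedged illustration of what fitting such a neighborhood model looks like, here is a least-squares sketch; the function name and the plain least-squares procedure are ours, not the paper's algorithm.

```python
import numpy as np

def fit_second_order_flow(x, y, u, v):
    """Least-squares fit of a second-order flow model: each component
    is a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2, giving 12
    coefficients in total (one per local deformation basis).
    x, y: sample positions; u, v: measured flow components there."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    cu, *_ = np.linalg.lstsq(A, u, rcond=None)  # 6 coefficients for u
    cv, *_ = np.linalg.lstsq(A, v, rcond=None)  # 6 coefficients for v
    return cu, cv
```

Fitting this model inside a small neighborhood yields the local deformation description; detecting where the fitted coefficients fail to extend smoothly is, in spirit, how analytic boundaries are located.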


Pattern Recognition Letters | 1990

Estimating the finite displacement using moments

Kwangyoen Wohn; Jian Wu

We present an efficient method to estimate the image motion from the silhouette of an object across two frames. The method utilizes the change of moments up to the second order, plus the silhouette itself. Since the method utilizes global measurements over the entire object, it is less sensitive to noise and other digitization effects than methods that rely on local measurements.
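
To illustrate why moments up to second order are informative: the centroid shift gives translation, the area ratio gives scale, and the principal-axis angle of the second central moments gives in-plane rotation. A minimal sketch with names of our own choosing; the paper's finite-displacement estimator is more general than this.

```python
import numpy as np

def silhouette_motion(mask_a, mask_b):
    """Estimate (translation, scale, rotation) between two binary
    silhouettes from moments up to second order -- a global,
    noise-tolerant measurement, in the spirit of the abstract."""
    def moments(m):
        ys, xs = np.nonzero(m)              # pixel coordinates of the silhouette
        area = xs.size
        cx, cy = xs.mean(), ys.mean()       # centroid (first moments)
        mu20 = ((xs - cx) ** 2).mean()      # second central moments
        mu02 = ((ys - cy) ** 2).mean()
        mu11 = ((xs - cx) * (ys - cy)).mean()
        theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # principal axis
        return area, cx, cy, theta
    a_area, a_cx, a_cy, a_th = moments(mask_a)
    b_area, b_cx, b_cy, b_th = moments(mask_b)
    translation = (b_cx - a_cx, b_cy - a_cy)
    scale = np.sqrt(b_area / a_area)        # area scales quadratically
    rotation = b_th - a_th
    return translation, scale, rotation
```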


Proceedings of the Workshop on Visual Motion | 1989

Estimation of 3-D motion and structure based on a temporally-oriented approach with the method of regression

Siu-Leong Iu; Kwangyoen Wohn

It is argued that the 3-D velocity of a single point, up to a scalar factor, can be recovered from its 2-D trajectory under perspective projection. The authors then extend this idea to the recovery of the 3-D motion of rigid objects. In both cases measurements are collected along the temporal axis first. The analysis is based on the assumption that the 3-D motion of the object is smooth, so that its 3-D velocity can be approximated as a truncated Taylor series of a predetermined degree. Regression relations between unknown motion parameters and measurements are derived for a single point and for a rigid body. The method of maximum likelihood is used to estimate the motion. The uniqueness of determining the 3-D motion of a single point is discussed. Experimental results obtained from simulated data and real images are given to illustrate the robustness of this approach.
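
One way to see why a 2-D trajectory determines 3-D velocity up to scale: for a point moving with constant 3-D velocity, perspective projection gives x(t) = (X0 + Vx t)/(Z0 + Vz t), and multiplying through by the denominator makes the model linear in the depth-scaled unknowns. The sketch below works under that constant-velocity, noise-free assumption; the paper handles higher-order Taylor models with maximum-likelihood estimation, so this linear least-squares variant is ours.

```python
import numpy as np

def velocity_from_trajectory(t, x, y):
    """Recover [X0/Z0, Y0/Z0, Vx/Z0, Vy/Z0, Vz/Z0] from a 2-D
    trajectory (x(t), y(t)) of a constant-velocity 3-D point under
    perspective projection.  Rearranging x(1 + vz*t) = x0 + vx*t
    gives x = x0 + vx*t - vz*t*x, which is linear in the unknowns,
    so ordinary least squares suffices for this sketch."""
    n = t.size
    A = np.zeros((2 * n, 5))
    b = np.concatenate([x, y])
    A[:n, 0] = 1.0;  A[:n, 2] = t;  A[:n, 4] = -t * x   # x-equations
    A[n:, 1] = 1.0;  A[n:, 3] = t;  A[n:, 4] = -t * y   # y-equations
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params
```

Note that only depth-scaled quantities appear, which is exactly the "up to a scalar factor" ambiguity stated in the abstract.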


International Conference on Pattern Recognition | 1990

Estimation of general rigid body motion from a long sequence of images

Siu-Leong Iu; Kwangyoen Wohn

The authors propose a new state formulation to analyze object motion with arbitrary orders of translation and rotation from a sequence of video images. An extended Kalman filter is used to find the estimates sequentially from noisy images. Simulations showed that the proposed formulation is quite effective for estimating nonconstant rigid-body motion.
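
As a minimal sketch of the sequential predict/update machinery, here is a plain linear Kalman filter on a scalar constant-velocity state. The paper's filter is an extended Kalman filter over the full nonlinear perspective model with arbitrary-order translation and rotation states; the state, noise levels, and names below are illustrative only.

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter over scalar position
    measurements zs; state = [position, velocity].  Returns the
    final state estimate after processing all measurements."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([zs[0], 0.0])              # initial state: unknown velocity
    P = np.eye(2)
    for z in zs[1:]:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (np.array([z]) - H @ x) # update with measurement
        P = (np.eye(2) - K @ H) @ P
    return x
```

The same predict/update loop carries over to the extended case by linearizing the perspective measurement model around the current estimate at each step.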


Visual Communications and Image Processing | 1990

Segmentation, Modeling and Classification of the Compact Objects in a Pile

Alok Gupta; Gareth Funka-Lea; Kwangyoen Wohn

We discuss the problem of interpreting dense range images obtained from a scene of a heap of man-made objects. We describe a range image interpretation system consisting of segmentation, modeling, verification, and classification procedures. First, the range image is segmented into regions, and reasoning is done about the physical support of these regions. Second, for each region several possible 3-D interpretations are made based on various scenarios of the object's physical support. Finally, each interpretation is tested against the data for its consistency. We have chosen the superquadric model as our 3-D shape descriptor, plus tapering deformations along the major axis. Experimental results obtained from some complex range images of mail pieces are reported to demonstrate the soundness and the robustness of our approach.
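
For reference, a superquadric is defined by an inside-outside function F, and model fitting minimizes a function of F over the range data. Below is a hedged sketch of F itself; the parameter names follow common convention rather than the paper, and the tapering deformation along the major axis is omitted.

```python
import numpy as np

def superquadric_inside_outside(p, a=(1.0, 1.0, 1.0), e1=1.0, e2=1.0):
    """Superquadric inside-outside function: F < 1 inside the surface,
    F = 1 on it, F > 1 outside.  a = half-axis lengths; e1, e2 =
    squareness exponents (e1 = e2 = 1 gives an ellipsoid; values
    near 0 give box-like shapes)."""
    x, y, z = p
    return ((np.abs(x / a[0]) ** (2.0 / e2) +
             np.abs(y / a[1]) ** (2.0 / e2)) ** (e2 / e1) +
            np.abs(z / a[2]) ** (2.0 / e1))
```

Verification against the data then amounts to checking how far the measured range points sit from F = 1 under each hypothesized pose and support scenario.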


Pattern Recognition | 1991

Recovery of 3D motion of a single particle

Siu-Leong Iu; Kwangyoen Wohn

In our previous analysis, reported elsewhere, we have shown that the 3D velocity of a single point, up to a scale factor, can be recovered from its 2D trajectory under perspective projection. We developed a batch method to solve the non-linear regression relation between motion parameters and measurements of projected position. The algorithm was tested on simulated data and real images. In this paper, we extend our work in two directions. First, to speed up the estimation process, we take a recursive approach to estimating the motion parameters. Second, we investigate the performance degradation due to two classes of model mismatch: parameter jumping and undermodeling. We then propose the Finite Lifetime Alternately Triggered Multiple Model Filter (FLAT MMF) as a solution. A number of experiments are conducted to illustrate the performance degradation due to the model mismatches and the performance improvement when the proposed FLAT MMF is used.


Cvgip: Image Understanding | 1991

On the deformation of image intensity and zero-crossing contours under motion

Jian Wu; Kwangyoen Wohn

Image intensity and edges are two major sources of information for estimating motion in the image plane. The 2-D motion obtained by analyzing the deformation of intensity and/or edges is used to recover the 3-D motion and structure. In this paper we show that the motion defined by the image intensity differs from the motion revealed by the (zero-crossing) edge. Understanding this discrepancy is important since most of the 3-D motion recovery algorithms reported so far require accurate 2-D motion as their input. We begin the discussion by assuming the invariance of intensity, that the evolution of image intensity manifests the underlying transformation of the image due solely to the motion of objects. We then raise the question whether the zero crossing of the Laplacian operating on the image intensity is invariant too. The change of perspective view due to relative motion results in the zero crossing not being preserved as the image evolves. We derive how much the zero-crossing contour deviates from its “correct” position due to motion. Our analysis shows that the deviation is inversely proportional to the third derivatives of image intensity. The result may be used to determine the agreement between the motion obtained from the zero-crossing contours and from the intensity change.
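
In one dimension, the zero-crossing edge discussed above reduces to a sign change of the second derivative of intensity. A small illustrative locator, with our own naming and discretization rather than the paper's formulation:

```python
import numpy as np

def second_derivative_zero_crossings(signal):
    """Locate sign changes of the discrete second derivative of a 1-D
    signal (the 1-D analogue of Laplacian zero-crossing edges).
    np.diff(signal, 2)[i] is centred on signal sample i + 1, so we
    report the signal index just past each sign flip."""
    d2 = np.diff(signal, 2)
    flips = np.nonzero(np.sign(d2[:-1]) * np.sign(d2[1:]) < 0)[0]
    return flips + 2
```

Applied to a smooth sigmoid-shaped edge, the crossing lands at the inflection point, i.e. the edge location.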


Visual Communications and Image Processing | 1990

Recovery of 3-D Motion of a Single Particle

Siu-Leong Iu; Kwangyoen Wohn

In our previous analysis, reported elsewhere, we have shown that the 3-D velocity of a single point, up to a scale factor, can be recovered from its 2-D trajectory under perspective projection. A batch method was used to solve the non-linear regression relation between motion parameters and measurements of projected position. Experimental results obtained from simulated images were given to demonstrate the soundness of this approach. In this paper, we extend our work in two directions. First, to make the estimation process meet the constraint of real-time computation, we derive a recursive approach to estimating the motion parameters. Second, we investigate the performance degradation due to two classes of model mismatch: parameter jumping and undermodeling. We then propose the finite lifetime asynchronically triggered multiple model filter (FLAT MMF) as a solution. A number of simulations are conducted to illustrate the estimation performance degradation due to the model mismatches and the performance improvement when the proposed FLAT MMF is used.

Collaboration


Dive into Kwangyoen Wohn's collaborations.

Top Co-Authors

Siu-Leong Iu
University of Pennsylvania

Alok Gupta
University of Pennsylvania

Pramath Raj Sinha
University of Pennsylvania

Trevor Darrell
University of California

Franc Solina
University of Ljubljana