Publication


Featured research published by Juyang Weng.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1992

Camera calibration with distortion models and accuracy evaluation

Juyang Weng; Paul Cohen; Marc Herniou

A camera model that accounts for major sources of camera distortion, namely radial, decentering, and thin prism distortions, is presented. The proposed calibration procedure consists of two steps: (1) the calibration parameters are estimated using a closed-form solution based on a distortion-free camera model; and (2) the parameters estimated in the first step are improved iteratively through a nonlinear optimization that takes the camera distortions into account. In accordance with minimum-variance estimation, the objective function to be minimized is the mean-square discrepancy between the observed image points and their inferred image projections computed with the estimated calibration parameters. The authors introduce a measure that can be used to evaluate calibration performance directly and to compare calibrations across different systems. The validity and performance of the calibration procedure are tested with both synthetic data and real images taken with tele- and wide-angle lenses.
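The three distortion types named in the abstract have standard functional forms (the Brown-Conrady family). The sketch below applies them to ideal normalized image coordinates; the coefficient names `k1`, `p1`, `p2`, `s1`, `s2` are illustrative, only the lowest-order radial term is kept, and this is not claimed to be the paper's exact parameterization.

```python
import numpy as np

def distort(x, y, k1=0.0, p1=0.0, p2=0.0, s1=0.0, s2=0.0):
    """Apply radial (k1), decentering (p1, p2), and thin-prism (s1, s2)
    distortion to ideal normalized image coordinates (x, y).
    Coefficient names are illustrative, not the paper's notation."""
    r2 = x * x + y * y
    # radial + decentering (tangential) + thin-prism terms
    dx = k1 * x * r2 + (p1 * (3 * x * x + y * y) + 2 * p2 * x * y) + s1 * r2
    dy = k1 * y * r2 + (2 * p1 * x * y + p2 * (x * x + 3 * y * y)) + s2 * r2
    return x + dx, y + dy

# With all coefficients zero the model reduces to the ideal pinhole projection.
xd, yd = distort(0.5, -0.25, k1=0.01)
```

In a two-step calibration of the kind the abstract describes, a closed-form linear solution would fix the distortion-free parameters first, and a nonlinear optimizer would then refine them jointly with these distortion coefficients.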


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1989

Motion and structure from two perspective views: algorithms, error analysis, and error estimation

Juyang Weng; Thomas S. Huang; Narendra Ahuja

This paper deals with estimating motion parameters and the structure of the scene from point (or feature) correspondences between two perspective views. An algorithm is presented that gives a closed-form solution for the motion parameters and the structure of the scene. The algorithm exploits redundancy in the data to obtain more reliable estimates in the presence of noise. An approach to estimating the errors in the computed motion parameters is introduced: specifically, the standard deviation of the error is estimated in terms of the variance of the errors in the image coordinates of the corresponding points. The estimated errors indicate the reliability of the solution as well as any degeneracy or near degeneracy that causes the motion estimation algorithm to fail. The error-estimation approach applies to a wide variety of problems that involve least-squares optimization or a pseudoinverse. Finally, the relationships between the errors and the parameters of the motion and imaging system are analyzed. The analysis shows, among other things, that the errors are very sensitive to the translation direction and the range of the field of view. Simulations demonstrate the performance of the algorithms and of the error estimation, as well as the relationships between the errors and the parameters of the motion and imaging system. The algorithms are tested on images of real-world scenes with point correspondences computed automatically.
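The core of closed-form two-view motion estimation is a linear solve for the essential matrix E from the epipolar constraint x2ᵀ E x1 = 0, after which E is decomposed into rotation and translation. The sketch below shows only the linear step on synthetic noise-free data; it is a generic eight-point-style estimate, not a reproduction of the paper's full algorithm or its error analysis.

```python
import numpy as np

def essential_linear(x1, x2):
    """Linear (closed-form) estimate of the essential matrix E from
    N >= 8 point correspondences between two calibrated views.
    x1, x2: (N, 3) homogeneous normalized image points with x2^T E x1 = 0."""
    # Each correspondence gives one linear equation kron(x2, x1) . vec(E) = 0.
    A = np.stack([np.kron(p2, p1) for p1, p2 in zip(x1, x2)])  # (N, 9)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)  # right null vector -> E, up to scale

# Synthetic check with a known rotation (about z) and translation.
rng = np.random.default_rng(0)
P = rng.uniform(-1, 1, (20, 3)) + np.array([0.0, 0.0, 5.0])  # in front of cam 1
ang = 0.1
R = np.array([[np.cos(ang), -np.sin(ang), 0.0],
              [np.sin(ang),  np.cos(ang), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.2, 0.1])
Q = P @ R.T + t                      # same points in camera 2 coordinates
x1 = P / P[:, 2:3]                   # normalized homogeneous projections
x2 = Q / Q[:, 2:3]
E = essential_linear(x1, x2)
residuals = [abs(p2 @ E @ p1) for p1, p2 in zip(x1, x2)]
```

With exact correspondences the epipolar residuals are at machine-precision level; redundancy (here 20 points for 8 degrees of freedom) is what gives the least-squares estimate its noise resistance.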


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2003

Candid covariance-free incremental principal component analysis

Juyang Weng; Yilu Zhang; Wey Shiuan Hwang

Appearance-based image analysis techniques require fast computation of the principal components of high-dimensional image vectors. We introduce a fast incremental principal component analysis (IPCA) algorithm, called candid covariance-free IPCA (CCIPCA), that computes the principal components of a sequence of samples incrementally without estimating the covariance matrix (hence covariance-free). The new method is motivated by the concept of statistical efficiency (the estimate has the smallest variance given the observed data). To this end, it keeps the scale of the observations and computes the mean of the observations incrementally, which is an efficient estimate for some well-known distributions (e.g., Gaussian), although the highest possible efficiency is not guaranteed in our case because the sample distribution is unknown. The method is designed for real-time applications and thus does not allow iterations. It converges very fast for high-dimensional image vectors. Some links between IPCA and the development of the cerebral cortex are also discussed.
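The CCIPCA update for the first eigenvector is a single multiply-and-average per sample, with no covariance matrix ever formed. A minimal sketch for the first component only, assuming zero-mean samples and omitting the amnesic (forgetting) parameter that the full method uses:

```python
import numpy as np

def ccipca_first_component(samples):
    """Incremental estimate of the first principal component via the
    CCIPCA-style update, without forming a covariance matrix.
    Samples are assumed zero-mean; the amnesic parameter is omitted."""
    v = None
    for n, u in enumerate(samples, start=1):
        if v is None:
            v = u.astype(float).copy()  # initialize with the first sample
            continue
        # v(n) = (n-1)/n * v(n-1) + 1/n * u * (u . v(n-1)) / ||v(n-1)||
        v = (n - 1) / n * v + (1 / n) * u * (u @ v) / np.linalg.norm(v)
    return v / np.linalg.norm(v)

# Synthetic stream whose dominant variance direction is the x-axis.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 3)) * np.array([10.0, 1.0, 1.0])
v1 = ccipca_first_component(X)
```

The estimate converges toward the dominant eigenvector as samples stream in; lower-order components are obtained in the full method by deflating each sample against the already-estimated components.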


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1992

Matching two perspective views

Juyang Weng; Narendra Ahuja; Thomas S. Huang

A computational approach to image matching is described. It uses multiple attributes associated with each image point to yield a generally overdetermined system of constraints, taking into account possible structural discontinuities and occlusions. In the algorithm implemented, intensity, edgeness, and cornerness attributes are used in conjunction with constraints arising from intraregional smoothness, field continuity and discontinuity, and occlusions to compute dense displacement fields and occlusion maps along the pixel grids. The intensity, edgeness, and cornerness are invariant under rigid motion in the image plane. To cope with large disparities, a multiresolution multigrid structure is employed. Coarser-level edgeness and cornerness measures are obtained by blurring the finer-level measures. The algorithm has been tested on real-world scenes with depth discontinuities and occlusions. A special case of two-view matching is stereo matching, where the motion between the two images is known. The algorithm can easily be specialized to perform stereo matching using the epipolar constraint.
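The multi-attribute idea can be illustrated with a toy one-scanline matcher that combines an intensity term with an "edgeness" term (gradient magnitude) in the per-pixel cost. This is a bare winner-take-all sketch under the stereo specialization mentioned above; the weights `w_int` and `w_edge` are hypothetical, and the paper's smoothness, discontinuity, and occlusion constraints are not modeled.

```python
import numpy as np

def match_row(left, right, max_disp, w_int=1.0, w_edge=1.0):
    """Per-pixel winner-take-all matching along one scanline using two
    attributes: intensity and edgeness (gradient magnitude).
    Returns the disparity d minimizing the combined cost at each pixel."""
    edge_l = np.abs(np.gradient(left))
    edge_r = np.abs(np.gradient(right))
    disp = np.zeros(len(left), dtype=int)
    for i in range(len(left)):
        best = np.inf
        for d in range(max_disp + 1):
            j = i - d
            if j < 0:
                break
            cost = w_int * (left[i] - right[j]) ** 2 \
                 + w_edge * (edge_l[i] - edge_r[j]) ** 2
            if cost < best:
                best, disp[i] = cost, d
    return disp

# Two views of the same 1-D "scene", offset by a disparity of 2 pixels.
rng = np.random.default_rng(2)
base = rng.normal(size=70)
left, right = base[0:64], base[2:66]
disp = match_row(left, right, max_disp=4)
```

Away from the scanline boundaries the recovered disparity is exactly 2; adding more attributes overdetermines the match, which is what makes the full formulation robust near discontinuities.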


Archive | 1989

Motion and structure from image sequences

Juyang Weng; Narendra Ahuja; Thomas S. Huang

Estimating the motion and structure of a scene from image sequences is an important and active research area in computer vision. The results have applications in vision-guided navigation, robot vision, and 3-D object recognition and manipulation, among others. Many of the theoretical results and techniques developed may also apply to related problems in other fields. Computing the image displacement field, or matching two images, is one of the difficult problems in motion analysis. A computational approach to image matching has been developed that uses multiple attributes associated with images to yield a generally overdetermined system of matching constraints, taking into account possible structural discontinuities and occlusions. From the computed image displacement field, the next step is to compute the motion parameters and the structure of the scene. A two-step approach is introduced to solve the nonlinear optimization problem reliably and efficiently. The uniqueness of the solution, its robustness in the presence of noise, the estimation of errors, and the dependence of the solution's reliability on the motion, the scene, and the parameters of the image sensors have been investigated. Analysis shows that a batch processing technique (the Levenberg-Marquardt nonlinear least-squares method) generally performs better than a sequential processing technique (iterated extended Kalman filtering) for nonlinear problems. For problems where estimates are needed before all the data are acquired, a recursive batch processing technique has been developed to improve performance and computational efficiency. The performance of the motion estimation algorithm has essentially reached the Cramer-Rao bound. The algorithm has been applied to real-world scenes with depth discontinuities and occlusions to compute motion parameters, dense depth maps, and occlusion maps from two images taken at different unknown positions and orientations relative to the scene.
The standard discrepancy between the projection of the inferred 3-D scene and the actually observed projection is as small as one half of a pixel. Other problems investigated include: (1) motion and structure from point correspondences for planar scenes; (2) motion and structure from line correspondences; and (3) dynamic motion estimation and prediction from long image sequences.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1987

3-D Motion Estimation, Understanding, and Prediction from Noisy Image Sequences

Juyang Weng; Thomas S. Huang; Narendra Ahuja

This paper presents an approach to understanding general 3-D motion of a rigid body from image sequences. Based on dynamics, a locally constant angular momentum (LCAM) model is introduced. The model is local in the sense that it is applied to a limited number of image frames at a time. Specifically, the model constrains the motion, over a local frame subsequence, to be a superposition of precession and translation. Thus, the instantaneous rotation axis of the object is allowed to change through the subsequence. The trajectory of the rotation center is approximated by a vector polynomial. The parameters of the model evolve in time so that they can adapt to long-term changes in motion characteristics. The nature and parameters of short-term motion can be estimated continuously with the goal of understanding motion through the image sequence. The estimation algorithm presented in this paper is linear, i.e., the algorithm consists of solving simultaneous linear equations. Based on the assumption that the motion is smooth, object positions and motion in the near future can be predicted, and short missing subsequences can be recovered. Noise smoothing is achieved by overdetermination and a least-squares criterion. The framework is flexible in the sense that it allows overdetermination in both the number of feature points and the number of image frames.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1992

Motion and structure from line correspondences: closed-form solution, uniqueness, and optimization

Juyang Weng; Thomas S. Huang; Narendra Ahuja

This work discusses estimating motion and structure parameters from line correspondences of a rigid scene. The authors present a closed-form solution for the motion and structure parameters from line correspondences through three monocular perspective views. The algorithm makes use of redundancy in the data to improve the accuracy of the solutions. The uniqueness of the solution is established, and necessary and sufficient conditions for degenerate spatial line configurations are given. Optimization is employed to further improve the accuracy of the estimates in the presence of noise. Simulations show that the errors of the optimized estimates are close to the theoretical lower error bound.


International Journal of Humanoid Robotics | 2004

DEVELOPMENTAL ROBOTICS: THEORY AND EXPERIMENTS

Juyang Weng

A hand-designed internal representation of the world cannot deal with unknown or uncontrolled environments. Motivated by human cognitive and behavioral development, this paper presents a theory, an architecture, and experimental results for developmental robotics. By a developmental robot, we mean that the robot generates its “brain” (or “central nervous system,” including the information processor and controller) through online, real-time interactions with its environment (including humans). A new Self-Aware Self-Effecting (SASE) agent concept is proposed, based on our SAIL and Dav developmental robots. The manual and autonomous development paradigms are formulated along with a theory of representation suited for autonomous development. Unlike traditional robot learning, the tasks that a developmental robot ends up learning are unknown at programming time, so the task-specific representation must be generated and updated through real-time “living” experiences. Experimental results with the SAIL and Dav developmental robots are presented, including visual attention selection, autonomous navigation, developmental speech learning, range-based obstacle avoidance, and scaffolding through transfer and chaining.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1997

Optimal registration of object views using range data

Chitra Dorai; Juyang Weng; Anil K. Jain

This paper deals with robust registration of object views in the presence of uncertainties and noise in depth data. Errors in registration of multiple views of a 3D object severely affect view integration during automatic construction of object models. We derive a minimum variance estimator (MVE) for computing the view transformation parameters accurately from range data of two views of a 3D object. The results of our experiments show that view transformation estimates obtained using MVE are significantly more accurate than those computed with an unweighted error criterion for registration.
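The minimum-variance idea, weighting each measurement by its reliability, can be illustrated with a weighted least-squares rigid registration (a weighted Kabsch/Umeyama solve). This is a stand-in sketch, not the paper's MVE derivation for range data: taking each weight proportional to the inverse noise variance of the corresponding point pair gives the minimum-variance flavor.

```python
import numpy as np

def weighted_rigid_registration(P, Q, w):
    """Weighted least-squares estimate of the rigid transform (R, t)
    minimizing sum_i w_i ||R p_i + t - q_i||^2 (weighted Kabsch solve).
    Choosing w_i ~ 1/variance_i gives a minimum-variance-style weighting."""
    w = w / w.sum()
    mp, mq = w @ P, w @ Q                       # weighted centroids
    H = (P - mp).T @ ((Q - mq) * w[:, None])    # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard keeps R a proper rotation (det +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mq - R @ mp
    return R, t

# Noise-free check: recover a known rotation about z and a known translation.
rng = np.random.default_rng(4)
P = rng.uniform(size=(30, 3))
ang = 0.3
R0 = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0, 0.0, 1.0]])
t0 = np.array([0.5, -1.0, 2.0])
Q = P @ R0.T + t0
R, t = weighted_rigid_registration(P, Q, np.ones(30))
```

With noisy depth data, down-weighting the less certain point pairs is what makes the weighted estimate more accurate than the unweighted error criterion the abstract compares against.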


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1999

Hierarchical discriminant analysis for image retrieval

Daniel L. Swets; Juyang Weng

A self-organizing framework for object recognition, built on a hierarchical database structure for image retrieval, is described. The self-organizing hierarchical optimal subspace learning and inference framework (SHOSLIF) system uses the theories of optimal linear projection for optimal feature derivation and a hierarchical structure to achieve logarithmic retrieval complexity. A space-tessellation tree is generated using the most expressive features (MEF) and most discriminating features (MDF) at each level of the tree. The major characteristics of the analysis include: (1) avoiding the limitation of global linear features by deriving a recursively better-fitted set of features for each of the recursively subdivided sets of training samples; (2) generating a smaller tree whose cell boundaries separate the samples along the class boundaries better than principal component analysis, thereby giving a better generalization capability (i.e., a better recognition rate in a disjoint test); (3) accelerating the retrieval using a tree structure for data pruning, utilizing a different set of discriminant features at each level of the tree. We allow for perturbations in the size and position of objects in the images through learning. We demonstrate the technique on a large image database of widely varying real-world objects taken in natural settings, and show the applicability of the approach for variability in position, size, and 3D orientation. This paper concentrates on the hierarchical partitioning of the feature spaces.
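The MEF/MDF pairing corresponds to PCA followed by a discriminant projection computed inside the PCA subspace. A minimal two-class sketch, assuming the MEFs are the top principal components and the MDF is Fisher's linear discriminant (the SHOSLIF tree applies this recursively per cell, which is not shown here):

```python
import numpy as np

def mef_then_mdf(X, y, n_mef=5):
    """Sketch of the MEF/MDF feature pipeline: most expressive features
    via PCA, then one most discriminating feature via Fisher's linear
    discriminant computed in the PCA subspace (two-class case)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W_mef = Vt[:n_mef].T                              # MEF basis (10 x n_mef)
    Z = Xc @ W_mef                                    # samples in MEF space
    m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
    Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)    # within-class scatter
    w_mdf = np.linalg.solve(Sw, m1 - m0)              # Fisher direction
    return W_mef, w_mdf / np.linalg.norm(w_mdf)

# Two synthetic classes separated along the first raw feature.
rng = np.random.default_rng(3)
X0 = rng.normal(size=(100, 10))
X1 = rng.normal(size=(100, 10))
X1[:, 0] += 8.0
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)
W_mef, w_mdf = mef_then_mdf(X, y)
scores = (X - X.mean(axis=0)) @ W_mef @ w_mdf
```

Computing the discriminant inside the expressive subspace keeps the within-class scatter matrix well-conditioned, which is why MEFs are derived before MDFs rather than running a discriminant directly in the raw high-dimensional image space.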

Collaboration


Dive into Juyang Weng's collaborations.

Top Co-Authors

Matthew D. Luciw
Dalle Molle Institute for Artificial Intelligence Research

Yilu Zhang
Michigan State University

Shuqing Zeng
Michigan State University

Xiao Huang
Michigan State University

Zhengping Ji
Michigan State University