
Publication


Featured research published by John J. Weng.


Computer Vision and Pattern Recognition | 1996

Hand segmentation using learning-based prediction and verification for hand sign recognition

Yuntao Cui; John J. Weng

This paper presents a prediction-and-verification segmentation scheme using attention images from multiple fixations. A major advantage of this scheme is that it can handle a large number of different deformable objects presented in complex backgrounds. The scheme is also relatively efficient since the segmentation is guided by past knowledge through a prediction-and-verification scheme. The system has been tested to segment hands in sequences of intensity images, where each sequence represents a hand sign. The experimental results showed a 95% correct segmentation rate with a 3% false rejection rate.
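
The prediction-and-verification loop can be pictured with a minimal sketch (an illustration only, not the authors' implementation): a mask is predicted by recalling the most similar learned attention image, and accepted only if a simple verification test on the masked region passes. The memory, the similarity measure, and the verification test below are all simplified stand-ins.

```python
# Minimal prediction-and-verification sketch (illustrative stand-ins only).
import numpy as np

def predict_mask(attention_img, memory):
    """Predict a segmentation mask by recalling the most similar learned example."""
    best_mask, best_dist = None, np.inf
    for exemplar_img, exemplar_mask in memory:
        dist = np.linalg.norm(attention_img - exemplar_img)
        if dist < best_dist:
            best_dist, best_mask = dist, exemplar_mask
    return best_mask

def verify(attention_img, mask, threshold=0.5):
    """Accept the prediction if the masked region looks coherent.
    Low intensity variance inside the mask stands in for a real verification test."""
    region = attention_img[mask > 0]
    return region.size > 0 and region.std() < threshold

def segment(fixations, memory):
    """Try the attention image of each fixation until a prediction verifies."""
    for attention_img in fixations:
        mask = predict_mask(attention_img, memory)
        if mask is not None and verify(attention_img, mask):
            return mask
    return None  # reject the frame if no fixation yields a verified mask

# Toy usage with random data in place of learned hand exemplars.
rng = np.random.default_rng(0)
memory = [(rng.random((32, 32)), (rng.random((32, 32)) > 0.5).astype(np.uint8))
          for _ in range(5)]
fixations = [rng.random((32, 32)) for _ in range(3)]
print("segmented" if segment(fixations, memory) is not None else "rejected")
```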


International Journal of Computer Vision | 1993

Image matching using the windowed Fourier phase

John J. Weng

A theoretical framework is presented in which the windowed Fourier phase (WFP) is introduced as the primary matching primitive. Zero-crossings and peaks correspond to special values of the phase. The WFP is quasi-linear and dense, and its spatial period and slope are controlled by the scale. This framework has the following important characteristics: 1) matching primitives are available almost everywhere to convey dense disparity information in every channel, either coarse or fine; 2) the false-target problem is significantly mitigated; 3) the matching is easier, uniform, and can be performed by a network suitable for parallel computer architectures; 4) the matching is fast since very few iterations are needed. In fact, the WFP is so informative that the original signal can be uniquely determined, up to a multiplicative constant, by the WFP in any channel. The use of phase as a matching primitive is also supported by existing psychophysical and neurophysiological studies. An implementation of the proposed theory has shown good results on synthesized and natural images.
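
A minimal 1-D sketch of phase-based matching, under the simplifying assumption that the windowed Fourier phase of one channel can be taken as the phase of a Gabor-filtered signal; a shift between two signals is then recovered from the wrapped phase difference divided by the filter's center frequency. The signal, scale, and shift below are arbitrary illustrative choices.

```python
# 1-D phase-based shift estimation sketch (one channel, one scale).
import numpy as np

def windowed_fourier_phase(signal, wavelength=16.0, sigma=8.0):
    """Phase of a Gabor-filtered signal, used here as the matching primitive."""
    x = np.arange(-3 * sigma, 3 * sigma + 1)
    omega = 2 * np.pi / wavelength                    # center frequency of the channel
    gabor = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * omega * x)
    response = np.convolve(signal, gabor, mode="same")
    return np.angle(response), omega

rng = np.random.default_rng(1)
left = rng.standard_normal(256)
true_shift = 3
right = np.roll(left, true_shift)                     # right[n] = left[n - 3]

phase_l, omega = windowed_fourier_phase(left)
phase_r, _ = windowed_fourier_phase(right)

# Wrap the phase difference into (-pi, pi], then use the quasi-linear phase slope
# (approximately the center frequency) to convert phase into displacement.
dphi = np.angle(np.exp(1j * (phase_r - phase_l)))
estimated_shift = -np.median(dphi[32:-32]) / omega    # trim borders affected by wrap-around
print(f"estimated shift: {estimated_shift:.2f} (true: {true_shift})")
```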


International Conference on Image Processing | 1995

Genetic algorithms for object recognition in a complex scene

Daniel L. Swets; Bill Punch; John J. Weng

A real-world computer vision module must deal with a wide variety of environmental parameters. Object recognition, one of the major tasks of such a vision module, typically requires a preprocessing step to locate the objects in the scene that ought to be recognized. Genetic algorithms are a search technique for dealing with very large search spaces, such as those encountered in image segmentation and object recognition. The article describes a technique that uses genetic algorithms to combine the image segmentation and object recognition steps for a complex scene. The results show that this approach is a viable method for successfully combining the image segmentation and object recognition steps of a computer vision module.
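
As a generic illustration of the search technique (not the paper's encoding or fitness function), the sketch below runs a small genetic algorithm over bit strings that select candidate regions; the fitness function is a stand-in for a recognition-confidence score, and the hidden "target" selection exists only to make the toy problem solvable.

```python
# Toy genetic algorithm over region-selection bit strings.
import numpy as np

rng = np.random.default_rng(2)
N_REGIONS, POP, GENS, MUT = 12, 30, 40, 0.02
target = rng.integers(0, 2, N_REGIONS)        # hypothetical "correct" region selection

def fitness(chrom):
    """Stand-in for a recognition-confidence score of the selected regions."""
    return np.mean(chrom == target)

pop = rng.integers(0, 2, (POP, N_REGIONS))
for _ in range(GENS):
    scores = np.array([fitness(c) for c in pop])
    # Tournament selection: each slot keeps the fitter of two random individuals.
    i, j = rng.integers(0, POP, POP), rng.integers(0, POP, POP)
    parents = np.where((scores[i] >= scores[j])[:, None], pop[i], pop[j])
    # One-point crossover with the next parent in the population.
    cut = rng.integers(1, N_REGIONS, POP)
    mates = np.roll(parents, 1, axis=0)
    children = np.where(np.arange(N_REGIONS) < cut[:, None], parents, mates)
    # Bit-flip mutation.
    flips = rng.random((POP, N_REGIONS)) < MUT
    pop = np.where(flips, 1 - children, children)

best = max(pop, key=fitness)
print("best fitness:", fitness(best))
```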


International Conference on Automatic Face and Gesture Recognition | 1996

Hand sign recognition from intensity image sequences with complex backgrounds

Yuntao Cui; John J. Weng

In this paper, we present a new approach to recognizing hand signs. In our approach, motion understanding (the hand movement) is tightly coupled with spatial recognition (the hand shape). The system uses multiclass, multidimensional discriminant analysis to automatically select the most discriminating features for gesture classification. A recursive partition tree approximator is proposed to perform the classification. This approach, combined with our previous work on hand segmentation, forms a new framework that addresses three key aspects of hand sign interpretation: the hand shape, the location, and the movement. The framework has been tested on 28 different hand signs. The experimental results show that the system achieves a 93.1% recognition rate on test sequences that were not used in the training phase.
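
The feature-selection step can be sketched with off-the-shelf multiclass linear discriminant analysis; the recursive partition tree approximator is not reproduced here and is replaced by a simple nearest-class-mean rule in the discriminant subspace. The scikit-learn digits dataset and 5-dimensional subspace below are illustrative choices, not the paper's data or settings.

```python
# Discriminant-analysis feature selection plus nearest-class-mean classification.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_digits(return_X_y=True)
lda = LinearDiscriminantAnalysis(n_components=5).fit(X, y)
Z = lda.transform(X)                                  # most-discriminating feature space

# Classify by the nearest class mean in the discriminant subspace.
means = np.array([Z[y == c].mean(axis=0) for c in np.unique(y)])
pred = np.argmin(((Z[:, None, :] - means[None]) ** 2).sum(axis=-1), axis=1)
print("training accuracy:", (pred == y).mean())
```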


International Journal of Computer Vision | 1997

Learning Recognition and Segmentation Using the Cresceptron

John J. Weng; Narendra Ahuja; Thomas S. Huang

This paper presents a framework called Cresceptron for view-based learning, recognition and segmentation. Specifically, it recognizes and segments image patterns that are similar to those learned, using a stochastic distortion model and view-based interpolation, allowing viewpoints that are moderately different from those used in learning. The learning phase is interactive. The user trains the system using a collection of training images. For each training image, the user manually draws a polygon outlining the region of interest and types in the label of its class. Then, from the directional edges of each of the segmented regions, the Cresceptron uses a hierarchical self-organization scheme to grow a sparsely connected network automatically, adaptively and incrementally during the learning phase. At each level, the system detects new image structures that need to be learned and assigns a new neural plane for each new feature. The network grows by creating new nodes and connections which memorize the new image structures and their context as they are detected. Thus, the structure of the network is a function of the training exemplars. The Cresceptron incorporates both individual learning and class learning; with the former, each training example is treated as a different individual, while with the latter, each example is a sample of a class. In the performance phase, segmentation and recognition are tightly coupled. No foreground extraction is necessary, because the response of the network is backtracked down the hierarchy to the image parts contributing to recognition. Several stochastic shape distortion models are analyzed to show why multilevel matching such as that in the Cresceptron can deal with more general stochastic distortions than a single-level matching scheme can. The system is demonstrated using images from broadcast television and other video segments to learn faces and other objects, and later to locate and recognize similar, but possibly distorted, views of the same objects.
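
The incremental-growth idea can be caricatured in a few lines (a toy sketch, not the Cresceptron itself): a layer stores one prototype per node and grows a new node whenever an input feature matches no stored prototype within a tolerance, so the layer's structure ends up being a function of what it has seen.

```python
# Toy incrementally growing layer: unseen structures create new nodes.
import numpy as np

class GrowingLayer:
    def __init__(self, tolerance):
        self.tolerance = tolerance
        self.prototypes = []                     # one stored feature per node

    def respond(self, feature):
        """Return the index of the matching node, growing a new one if needed."""
        for k, proto in enumerate(self.prototypes):
            if np.linalg.norm(feature - proto) < self.tolerance:
                return k                         # an existing node recognizes the structure
        self.prototypes.append(feature.copy())   # memorize a new image structure
        return len(self.prototypes) - 1

rng = np.random.default_rng(3)
layer = GrowingLayer(tolerance=1.0)
for _ in range(20):
    layer.respond(rng.standard_normal(8))
print("nodes grown:", len(layer.prototypes))
```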


International Symposium on Computer Vision | 1995

Efficient content-based image retrieval using automatic feature selection

Daniel L. Swets; John J. Weng

We describe a self-organizing framework for content-based retrieval of images from large image databases at the object recognition level. The system uses the theories of optimal projection for optimal feature selection and a hierarchical image database for rapid retrieval rates. We demonstrate the query technique on a large database of widely varying real-world objects in natural settings, and show the applicability of the approach even for large variability within a particular object class.
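
A minimal retrieval sketch under simplifying assumptions: PCA stands in for the learned optimal projection and a single k-means layer stands in for the hierarchical database, so a query is projected, routed to one cluster, and ranked only against that cluster's members. The random data, dimensions, and library choices below are illustrative.

```python
# Coarse-to-fine retrieval: project with PCA, route to a cluster, rank within it.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
database = rng.random((500, 256))             # 500 images as raw feature vectors

pca = PCA(n_components=16).fit(database)
feats = pca.transform(database)               # compact retrieval features

index = KMeans(n_clusters=10, n_init=10, random_state=0).fit(feats)

def query(image_vec, top_k=5):
    z = pca.transform(image_vec[None])        # project the query image
    cluster = index.predict(z)[0]             # coarse level: pick one cluster
    members = np.where(index.labels_ == cluster)[0]
    dists = np.linalg.norm(feats[members] - z, axis=1)
    return members[np.argsort(dists)[:top_k]] # fine level: rank within the cluster

print(query(database[42]))                    # image 42 should come back first
```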


International Conference on Computer Vision | 1993

Learning recognition and segmentation of 3-D objects from 2-D images

John J. Weng; Narendra Ahuja; Thomas S. Huang

A framework called Cresceptron is introduced for automatic algorithm design through learning of concepts and rules, thus deviating from the traditional mode in which humans specify the rules constituting a vision algorithm. With the Cresceptron, humans as designers need only provide a good structure for learning, but they are relieved of most design details. The Cresceptron has been tested on the task of visual recognition by recognizing 3-D general objects from 2-D photographic images of natural scenes and segmenting the recognized objects from the cluttered image background. The Cresceptron uses a hierarchical structure to grow networks automatically, adaptively, and incrementally through learning. The Cresceptron makes it possible to generalize training exemplars to other perceptually equivalent items. Experiments with a variety of real-world images are reported to demonstrate the feasibility of learning in the Cresceptron.


International Conference on Pattern Recognition | 1996

View-based hand segmentation and hand-sequence recognition with complex backgrounds

Yuntao Cui; John J. Weng

In this paper, we present a three-stage framework to analyze time-varying image sequences. The focus of this paper is the second stage: segmentation. We propose a prediction-and-verification segmentation scheme that efficiently utilizes the attention images from multiple fixations. The experimental results show a 95% correct segmentation rate with a 3% false rejection rate on 805 test images. Hand sign recognition based on the segmentation results shows that the system achieves good performance on this very difficult vision task.


IEEE International Conference on Automatic Face and Gesture Recognition | 1998

Toward automation of learning: the state self-organization problem for a face recognizer

John J. Weng; Wey-Shiuan Hwang

The capability of recognition is critical in learning, but variation of sensory input makes recognition a very challenging task. The current technology in computer vision and pattern recognition requires humans to collect images, store images, segment images for computers, and train computer recognition systems using these images. It is unlikely that such a labor-intensive manual process can meet the demands of many challenging recognition tasks that are critical for generating intelligent behavior, such as face recognition, object recognition and speech recognition. Our goal is to enable machines to learn directly from sensory input streams while interacting with the environment, including human teachers. While doing so, the human teacher is not allowed to dictate the internal state values of the system; he or she can influence the system only through the system's sensors and effectors. Such a capability requires a fundamentally new way of addressing the learning problem, one that unifies the learning and performance phases and requires a systematic self-organization capability. This paper concentrates on the state self-organization problem. We apply the method to autonomous face recognition.


Pattern Recognition Letters | 1996

Estimation of ellipse parameters using optimal minimum variance estimator

Yuntao Cui; John J. Weng; Herbert Reynolds

In this paper, we propose an unbiased minimum variance estimator for the parameters of an ellipse. A space decomposition scheme is presented to direct the search for the optimal parameters. Experimental results show a dramatic improvement over existing weighted least-squares approaches, especially when the ellipse is occluded.
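
For context, the sketch below shows the kind of plain algebraic least-squares conic fit that such estimators improve upon; the paper's unbiased minimum-variance estimator and its space decomposition search are not implemented here, and the example ellipse and noise level are arbitrary.

```python
# Baseline algebraic least-squares conic fit (not the paper's estimator).
import numpy as np

def fit_conic(x, y):
    """Least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 with unit-norm coefficients."""
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)               # smallest right singular vector minimizes ||D w||
    return Vt[-1]

# Noisy points on an ellipse centered at (1, 2) with semi-axes 3 and 1.
rng = np.random.default_rng(5)
t = np.linspace(0, 2 * np.pi, 60)
x = 1 + 3 * np.cos(t) + 0.01 * rng.standard_normal(t.size)
y = 2 + 1 * np.sin(t) + 0.01 * rng.standard_normal(t.size)

a, b, c, d, e, f = fit_conic(x, y)
print("discriminant b^2 - 4ac (negative for an ellipse):", b**2 - 4 * a * c)
```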

Collaboration


Dive into John J. Weng's collaborations.

Top Co-Authors

Yuntao Cui (Michigan State University)
Nan Zhang (Michigan State University)
Anil K. Jain (Michigan State University)
Chitra Dorai (Michigan State University)
Dan Judd (Michigan State University)
Nalini K. Ratha (Michigan State University)
Shaoyun Chen (Michigan State University)