Douglas R. Heisterkamp
Oklahoma State University–Stillwater
Publications
Featured research published by Douglas R. Heisterkamp.
IEEE Transactions on Neural Networks | 2003
Jing Peng; Douglas R. Heisterkamp; H. K. Dai
Nearest neighbor (NN) classification relies on the assumption that class conditional probabilities are locally constant. This assumption becomes false in high dimensions with finite samples due to the curse of dimensionality. The NN rule introduces severe bias under these conditions. We propose a locally adaptive neighborhood morphing classification method to try to minimize bias. We use local support vector machine learning to estimate an effective metric for producing neighborhoods that are elongated along less discriminant feature dimensions and constricted along the most discriminant ones. As a result, the class conditional probabilities can be expected to be approximately constant in the modified neighborhoods, whereby better classification performance can be achieved. The efficacy of our method is validated and compared against other competing techniques using a number of datasets.
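The core idea, classifying with a neighborhood that is stretched or shrunk per feature dimension, can be sketched with a weighted Euclidean metric. This is a minimal illustration, not the paper's SVM-based relevance estimate: the `weights` vector stands in for the locally learned feature relevance.

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, x, weights, k=3):
    """Classify x by k-NN under a per-feature weighted Euclidean metric.

    `weights` plays the role of the locally estimated feature relevance:
    large weights constrict the neighborhood along discriminant dimensions,
    small weights elongate it along less discriminant ones.
    """
    d2 = ((X_train - x) ** 2 * weights).sum(axis=1)   # weighted squared distances
    nn = np.argsort(d2)[:k]                           # indices of the k nearest
    labels, counts = np.unique(y_train[nn], return_counts=True)
    return labels[np.argmax(counts)]                  # majority vote

# Toy data: the classes differ only in the first feature.
X = np.array([[0.0, 0.0], [0.1, 5.0], [1.0, 0.0], [0.9, 5.0]])
y = np.array([0, 0, 1, 1])
# Down-weighting the irrelevant second feature lets the neighborhood elongate
# along it, so the borderline point is classified by the discriminant feature.
print(weighted_knn_predict(X, y, np.array([0.8, 0.1]),
                           weights=np.array([1.0, 0.01]), k=1))
```

With uniform weights the noisy second feature would dominate the distance; the adaptive metric suppresses it.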
International Conference on Pattern Recognition | 2002
Douglas R. Heisterkamp
This paper proposes a novel view of the information generated by relevance feedback. The latent semantic analysis is adapted to this view to extract useful inter-query information. The view presented in this paper is that the fundamental vocabulary of the system is the images in the database and that relevance feedback is a document whose words are the images. A relevance feedback document contains the intra-query information which expresses the semantic intent of the user over that query. The inter-query information then takes the form of a collection of documents which can be subjected to latent semantic analysis. An algorithm to query the latent semantic index is presented and evaluated against real data sets.
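Under this view, latent semantic indexing applies directly: build an image-by-session matrix (images as "words", feedback sessions as "documents"), take a truncated SVD, and fold new feedback into the latent space. The sketch below uses a tiny hand-made matrix and a plain cosine ranking; it illustrates the mechanics, not the paper's exact algorithm.

```python
import numpy as np

# Each column is one past relevance-feedback session ("document"); each row is
# one database image ("word"). Entry 1 marks the image relevant in that session.
# The 4-image x 3-session matrix is toy data for illustration.
A = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1],
              [0, 1, 1]], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                   # truncation rank of the latent space
Uk, sk = U[:, :k], s[:k]

def query_latent_index(feedback):
    """Fold a new feedback vector (over images) into the latent space and
    return past sessions ranked by cosine similarity."""
    q = feedback @ Uk / sk              # project the query into latent space
    docs = Vt[:k].T                     # session coordinates in latent space
    sims = docs @ q / (np.linalg.norm(docs, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)

# A new session marking images 0 and 1 relevant matches sessions 0 and 1 first.
print(query_latent_index(np.array([1.0, 1.0, 0.0, 0.0])))
```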
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2004
Jing Peng; Douglas R. Heisterkamp; H. K. Dai
Nearest neighbor classification assumes locally constant class conditional probabilities. This assumption becomes invalid in high dimensions due to the curse of dimensionality. Severe bias can be introduced under these conditions when using the nearest neighbor rule. We propose an adaptive nearest neighbor classification method to try to minimize bias. We use quasiconformal transformed kernels to compute neighborhoods over which the class probabilities tend to be more homogeneous. As a result, better classification performance can be expected. The efficacy of our method is validated and compared against other competing techniques using a variety of data sets.
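A quasiconformal transformation of a kernel has the form K̃(x, y) = c(x)·c(y)·K(x, y) for a positive scaling function c, which rescales the induced feature space locally. The sketch below uses an RBF base kernel and a hypothetical scaling function in place of the estimate the paper derives from the data, and shows that the transformation changes feature-space distances.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian RBF base kernel."""
    return np.exp(-gamma * np.dot(x - y, x - y))

def quasiconformal_kernel(x, y, c, gamma=1.0):
    """Quasiconformal transform of the base kernel:
    K~(x, y) = c(x) * c(y) * K(x, y), with c a positive scaling function
    (here caller-supplied, standing in for the data-driven estimate)."""
    return c(x) * c(y) * rbf(x, y, gamma)

def kernel_distance2(x, y, k):
    """Squared distance in the feature space induced by kernel k."""
    return k(x, x) + k(y, y) - 2 * k(x, y)

# Hypothetical scaling: c is large near the origin, expanding the space there
# so that neighborhoods around the origin are constricted.
c = lambda x: 1.0 + np.exp(-np.dot(x, x))
x, y = np.array([0.0, 0.0]), np.array([1.0, 0.0])
base = kernel_distance2(x, y, rbf)
transformed = kernel_distance2(x, y, lambda a, b: quasiconformal_kernel(a, b, c))
print(base, transformed)   # the transformed distance between x and y is larger
```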
International Conference on Pattern Recognition | 2002
Jing Peng; Douglas R. Heisterkamp; H. K. Dai
Nearest neighbor classification assumes locally constant class conditional probabilities. This assumption becomes invalid in high dimensions due to the curse of dimensionality. Severe bias can be introduced under these conditions when using the nearest neighbor rule. We propose an adaptive nearest neighbor classification method to try to minimize bias. We use quasiconformal transformed kernels to compute neighborhoods over which the class probabilities tend to be more homogeneous. As a result, better classification performance can be expected. The efficacy of our method is validated and compared against other competing techniques using a variety of data sets.
Computer Vision and Pattern Recognition | 2001
Douglas R. Heisterkamp; Jing Peng; H. K. Dai
The paper presents a novel approach to ranking relevant images for retrieval. Distance in the feature space associated with a kernel is used to rank relevant images. An adaptive quasiconformal mapping based on relevance feedback is used to generate successive new kernels. The effect of the quasiconformal mapping is a change in the spatial resolution of the feature space. The spatial resolution around irrelevant samples is dilated, whereas the spatial resolution around relevant samples is contracted. This new space created by the quasiconformal kernel is used to measure the distance between the query and the images in the database. An interesting interpretation of the metric is found by looking at the Taylor series approximation to the original kernel. Then the squared distance in the feature space can be seen as a combination of a Parzen window estimate of the squared chi-squared distance and a weighted squared Euclidean distance. Experimental results using real-world data validate the efficacy of our method.
Computer Vision and Pattern Recognition | 2004
Iker Gondra; Douglas R. Heisterkamp
Relevance feedback approaches based on support vector machine (SVM) learning have been applied to significantly improve retrieval performance in content-based image retrieval (CBIR). Those approaches require fixed-length image representations because SVM kernels represent an inner product in a feature space that is a non-linear transformation of the input space. Many region-based CBIR approaches create a variable-length image representation and define a similarity measure between two variable-length representations. The standard SVM approach cannot be applied to such representations because the similarity measure violates the requirements that SVM places on the kernel. Fortunately, a generalized SVM (GSVM) has been developed that allows the use of an arbitrary kernel. In this paper, we present an initial investigation into utilizing a GSVM-based relevance feedback learning algorithm. Since GSVM does not place restrictions on the kernel, any image similarity measure can be used. In particular, the proposed approach uses an image similarity measure developed for region-based, variable-length representations. Experimental results over real-world images demonstrate the efficacy of the proposed method.
International Conference on Image Processing | 2003
Jing Peng; Douglas R. Heisterkamp
Relevance feedback is an attractive approach to developing flexible metrics for content-based retrieval in image and video databases. Large image databases require an index structure in order to reduce nearest neighbor computation. However, flexible metrics can alter an input space in a highly nonlinear fashion, thereby rendering the index structure useless. Few systems have been developed that address the apparent flexible metric/indexing dilemma. This paper proposes kernel indexing to try to address this dilemma. The key observation is that kernel metrics may be nonlinear and highly dynamic in the input space but remain Euclidean in induced feature space. It is this linear invariance in feature space that enables us to learn arbitrary relevance functions without changing the index in feature space. As a result, kernel indexing supports efficient relevance feedback retrieval in large image databases. Experimental results using a large set of image data are very promising.
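The key observation, that a metric which is nonlinear in the input space is still plain Euclidean in the induced feature space, can be checked concretely for a kernel with a known explicit feature map. The sketch below uses the degree-2 polynomial kernel in 2-D as an illustration (not the kernels from the paper): the distance computed through the kernel alone equals the Euclidean distance between the explicitly mapped points, which is the invariance that lets a feature-space index survive relevance-feedback metric changes.

```python
import numpy as np

def poly2_features(x):
    """Explicit feature map of the degree-2 polynomial kernel (x.y)^2 in 2-D."""
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

def poly2_kernel(x, y):
    return np.dot(x, y) ** 2

x, y = np.array([1.0, 2.0]), np.array([3.0, 0.5])
# Squared distance computed through the kernel alone ...
d2_kernel = poly2_kernel(x, x) + poly2_kernel(y, y) - 2 * poly2_kernel(x, y)
# ... equals the ordinary squared Euclidean distance between the mapped points.
d2_explicit = np.sum((poly2_features(x) - poly2_features(y)) ** 2)
print(d2_kernel, d2_explicit)
```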
International Conference on Pattern Recognition | 2000
Douglas R. Heisterkamp; Jing Peng; H. K. Dai
Probabilistic feature relevance learning (PFRL) is an effective technique for adaptively computing local feature relevance for content-based image retrieval. It however becomes less attractive in situations where all the input variables have the same local relevance, and yet retrieval performance might still be improved by simple query shifting. We propose a retrieval method that combines feature relevance learning and query shifting to try to achieve the best of both worlds. We use a linear discriminant analysis to compute the new query and exploit the local neighborhood structure centered at the new query by invoking PFRL. As a result, the modified neighborhoods at the new query tend to contain sample images that are more relevant to the input query. The efficacy of our method is validated using both synthetic and real world data.
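The query-shifting half of the method can be sketched with a simple mean-shift update. Note the paper computes the new query via linear discriminant analysis; the Rocchio-style rule below is a simplified stand-in that conveys the idea of moving the query toward relevant samples and away from irrelevant ones, with illustrative `alpha`/`beta` values.

```python
import numpy as np

def shifted_query(query, relevant, irrelevant, alpha=1.0, beta=0.5):
    """Shift the query toward the mean of the relevant images and away from
    the mean of the irrelevant ones (Rocchio-style stand-in for the paper's
    LDA-based computation of the new query)."""
    return (query
            + alpha * (relevant.mean(axis=0) - query)
            - beta * (irrelevant.mean(axis=0) - query))

q = np.array([0.0, 0.0])
rel = np.array([[1.0, 1.0], [1.2, 0.8]])
irr = np.array([[-1.0, -1.0], [-0.8, -1.2]])
print(shifted_query(q, rel, irr))   # the new query lands among the relevant images
```

Feature relevance learning (PFRL) would then be invoked on the neighborhood centered at this new query.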
Multimedia Tools and Applications | 2005
Douglas R. Heisterkamp; Jing Peng
Many data partitioning index methods perform poorly in high dimensional space and do not support relevance feedback retrieval. The vector approximation file (VA-File) approach overcomes some of the difficulties of high dimensional vector spaces, but cannot be applied to relevance feedback retrieval using kernel distances in the data measurement space. This paper introduces a novel KVA-File (kernel VA-File) that extends VA-File to kernel-based retrieval methods. An efficient approach to approximating vectors in an induced feature space is presented with the corresponding upper and lower distance bounds. Thus an effective indexing method is provided for kernel-based relevance feedback image retrieval methods. Experimental results using large image data sets (approximately 100,000 images with 463 dimensions of measurement) validate the efficacy of our method.
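The VA-File machinery being extended here works by replacing each vector with a coarse per-dimension cell index and filtering candidates through lower/upper distance bounds derived from the cell alone. The sketch below shows that filtering step for plain Euclidean distance on random data; it is the classical VA-File idea, not the kernel-space approximation the paper develops.

```python
import numpy as np

def quantize(X, bits=2):
    """Uniform scalar quantization per dimension (VA-File style): each vector
    is replaced by a small cell index; the cell yields distance bounds."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    cells = 2 ** bits
    width = (hi - lo) / cells
    idx = np.clip(((X - lo) / width).astype(int), 0, cells - 1)
    return idx, lo, width

def distance_bounds(q, idx, lo, width):
    """Per-vector lower/upper bounds on the true distance to q, computed from
    the approximation cell alone."""
    cell_lo = lo + idx * width                       # lower corner of each cell
    cell_hi = cell_lo + width                        # upper corner of each cell
    nearest = np.clip(q, cell_lo, cell_hi)           # closest point inside the cell
    farthest = np.where(np.abs(q - cell_lo) > np.abs(q - cell_hi),
                        cell_lo, cell_hi)            # farthest corner of the cell
    return (np.linalg.norm(nearest - q, axis=1),
            np.linalg.norm(farthest - q, axis=1))

rng = np.random.default_rng(0)
X = rng.random((100, 8))
q = rng.random(8)
idx, lo, width = quantize(X)
lb, ub = distance_bounds(q, idx, lo, width)
true = np.linalg.norm(X - q, axis=1)
# The bounds bracket the true distances, so most vectors can be pruned
# from a nearest-neighbor search without reading the full data.
print(bool(np.all(lb <= true + 1e-9) and np.all(true <= ub + 1e-9)))
```

KVA-File's contribution is making such bounds available when distances are kernel distances in an induced feature space rather than Euclidean distances in the measurement space.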
ACM International Workshop on Multimedia Databases | 2003
Douglas R. Heisterkamp; Jing Peng
Many data partitioning index methods perform poorly in high dimensional space and do not support relevance feedback retrieval. The vector approximation file (VA-File) approach overcomes some of the difficulties of high dimensional vector spaces, but cannot be applied to relevance feedback retrieval using kernel distances in the data measurement space. This paper introduces a novel KVA-File (kernel VA-File) that extends VA-File to kernel-based retrieval methods. A key observation is that kernel distances may be non-linear in the data measurement space but are still linear in an induced feature space. It is this linear invariance in the induced feature space that enables KVA-File to work with kernel distances. An efficient approach to approximating vectors in an induced feature space is presented with the corresponding upper and lower distance bounds. Thus an effective indexing method is provided for kernel-based relevance feedback image retrieval methods. Experimental results using large image data sets (approximately 100,000 images with 463 dimensions of measurement) validate the efficacy of our method.