V. Paul Pauca
Wake Forest University
Publications
Featured research published by V. Paul Pauca.
Computational Statistics & Data Analysis | 2007
Michael W. Berry; Murray Browne; Amy N. Langville; V. Paul Pauca; Robert J. Plemmons
The development and use of low-rank approximate nonnegative matrix factorization (NMF) algorithms for feature extraction and identification in the fields of text mining and spectral data analysis are presented. The evolution and convergence properties of hybrid methods based on both sparsity and smoothness constraints for the resulting nonnegative matrix factors are discussed. The interpretability of NMF outputs in specific contexts is examined, along with opportunities for future work in modifying NMF algorithms for large-scale and time-varying data sets.
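The basic NMF iteration underlying such algorithms can be sketched with the standard Lee-Seung multiplicative updates; the matrix sizes, rank, and iteration count below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

# Minimal NMF sketch: factor a nonnegative matrix V as W @ H using
# multiplicative updates, which keep both factors nonnegative.
rng = np.random.default_rng(0)
V = rng.random((20, 30))           # nonnegative data matrix
k = 5                              # target rank (assumed)
W = rng.random((20, k))
H = rng.random((k, 30))

eps = 1e-9                         # guards against division by zero
errors = []
for _ in range(100):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)
    errors.append(np.linalg.norm(V - W @ H))

# The Frobenius reconstruction error is nonincreasing under these updates.
print(errors[0] > errors[-1])      # True
```

The sparsity and smoothness constraints discussed above enter as penalty terms that modify these update rules.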
Information Processing and Management | 2006
Farial Shahnaz; Michael W. Berry; V. Paul Pauca; Robert J. Plemmons
A methodology for automatically identifying and clustering semantic features or topics in a heterogeneous text collection is presented. Textual data is encoded using a low-rank nonnegative matrix factorization algorithm to retain natural data nonnegativity, thereby eliminating the need to use subtractive basis vector and encoding calculations present in other techniques such as principal component analysis for semantic feature abstraction. Existing techniques for nonnegative matrix factorization are reviewed and a new hybrid technique for nonnegative matrix factorization is proposed. Performance evaluations of the proposed method are conducted on a few benchmark text collections used in standard topic detection studies.
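The clustering step described above can be sketched as follows: after factoring a term-document matrix, each document is assigned to its dominant semantic feature via the encoding matrix. The tiny random corpus and dimensions are illustrative assumptions:

```python
import numpy as np

# Sketch: topic assignment from an NMF of a term-document matrix V.
# Columns of W are semantic features; columns of H encode documents.
rng = np.random.default_rng(4)
terms, docs, topics = 12, 8, 3
V = rng.random((terms, docs))      # stand-in for a weighted term-document matrix

W = rng.random((terms, topics))
H = rng.random((topics, docs))
for _ in range(200):               # standard multiplicative updates
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

clusters = H.argmax(axis=0)        # dominant semantic feature per document
print(clusters.shape)              # (8,)
```

Because all entries stay nonnegative, each column of H reads directly as additive topic weights, which is the interpretability advantage over PCA noted above.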
Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2008
Qiang Zhang; Han Wang; Robert J. Plemmons; V. Paul Pauca
An important and well-studied problem in hyperspectral image data applications is to identify materials present in the object or scene being imaged and to quantify their abundance in the mixture. Due to the increasing quantity of data usually encountered in hyperspectral datasets, effective data compression is also an important consideration. In this paper, we develop novel methods based on tensor analysis that focus on all three of these goals: material identification, material abundance estimation, and data compression. Test results are reported for all three objectives.
conference on advanced signal processing algorithms architectures and implementations | 2004
Robert J. Plemmons; Michael Horvath; Emily Leonhardt; V. Paul Pauca; Sudhakar Prasad; Stephen B. Robinson; Harsha Setty; Todd C. Torgersen; Joseph van der Gracht; Edward R. Dowski; Ramkumar Narayanswamy; Paulo E. X. Silveira
Computational imaging systems are modern systems that combine generalized aspheric optics with image processing capability. These systems can be optimized to achieve performance well beyond that of systems consisting solely of traditional optics. Computational imaging technology can be used to advantage in iris recognition applications. A major difficulty in current iris recognition systems is a very shallow depth-of-field that limits system usability and increases system complexity. We first review some current iris recognition algorithms, and then describe computational imaging approaches to iris recognition using cubic phase wavefront encoding. These new approaches can greatly increase the depth-of-field over that possible with traditional optics, while maintaining sufficient recognition accuracy. In these approaches, the combination of optics, detectors, and image processing all contribute to the iris recognition accuracy and efficiency. We describe different optimization methods for designing the optics and the image processing algorithms, and provide laboratory and simulation results from applying these systems, including results on restoring the intermediate phase-encoded images using both a direct Wiener filter and iterative conjugate gradient methods.
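The direct Wiener-filter restoration mentioned above can be sketched in the frequency domain; the toy scene, blur kernel, and regularization constant below are synthetic assumptions, not the paper's cubic-phase data:

```python
import numpy as np

# Sketch of direct Wiener-filter restoration of a blurred image.
rng = np.random.default_rng(2)
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0                    # toy scene

# Simple 3x3 box blur, rolled so the kernel is centered at the origin
# for FFT-based convolution.
psf = np.zeros((64, 64))
psf[:3, :3] = 1.0
psf = np.roll(psf, (-1, -1), axis=(0, 1))
psf /= psf.sum()

H = np.fft.fft2(psf)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))

# Wiener filter: conj(H) / (|H|^2 + K), with K acting as an assumed
# noise-to-signal power estimate that regularizes small |H|.
K = 1e-3
G = np.conj(H) / (np.abs(H) ** 2 + K)
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

# Restoration should reduce the error relative to the blurred image.
print(np.linalg.norm(restored - img) < np.linalg.norm(blurred - img))
```

The iterative conjugate gradient alternative mentioned above replaces this single frequency-domain division with repeated applications of the blurring operator, which is better suited to spatially varying PSFs.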
conference on advanced signal processing algorithms architectures and implementations | 2004
Sudhakar Prasad; V. Paul Pauca; Robert J. Plemmons; Todd C. Torgersen; Joseph van der Gracht
The insertion of a suitably designed phase plate in the pupil of an imaging system makes it possible to encode the depth dimension of an extended three-dimensional scene by means of an approximately shift-invariant PSF. The so-encoded image can then be deblurred digitally by standard image recovery algorithms to recoup the depth-dependent detail of the original scene. A similar strategy can be adopted to compensate for certain monochromatic aberrations of the system. Here we consider two somewhat complementary approaches to optimizing the design of the phase plate: one based on Fisher information, which attempts to reduce the sensitivity of the phase-encoded image to misfocus, and the other based on a minimax formulation of the sum of singular values of the system blurring matrix, which attempts to maximize the resolution in the final image. Comparisons of these two optimization approaches are discussed. Our preliminary demonstration of the use of such pupil-phase engineering to successfully control system aberrations, particularly spherical aberration, is also presented.
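The misfocus insensitivity that motivates the Fisher-information criterion can be illustrated with a simple Fourier-optics simulation of a cubic pupil phase; the pupil sampling, cubic strength, and defocus values below are illustrative assumptions:

```python
import numpy as np

# Sketch: a cubic phase plate makes the PSF approximately invariant
# to misfocus, at the cost of a broader (digitally restorable) PSF.
n = 64
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= 1.0).astype(float)   # circular aperture

def psf(alpha, w20):
    # alpha: cubic-phase strength; w20: defocus coefficient, both in waves.
    phase = alpha * (X**3 + Y**3) + w20 * (X**2 + Y**2)
    field = pupil * np.exp(2j * np.pi * phase)
    h = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return h / h.sum()                        # normalized intensity PSF

def psf_change(alpha):
    # How much the PSF changes between focus and 2 waves of defocus.
    return np.linalg.norm(psf(alpha, 0.0) - psf(alpha, 2.0))

# With a strong cubic term the PSF varies far less with misfocus
# than the conventional (alpha = 0) system.
print(psf_change(20.0) < psf_change(0.0))
```

A Fisher-information-based design formalizes exactly this comparison, penalizing pupil phases whose PSF carries too much information about the unknown defocus.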
visual information processing conference | 2004
Joseph van der Gracht; V. Paul Pauca; Harsha Setty; Ramkumar Narayanswamy; Robert J. Plemmons; Sudhakar Prasad; Todd C. Torgersen
Automated iris recognition is a promising method for noninvasive verification of identity. Although it is noninvasive, the procedure requires considerable cooperation from the user. In typical acquisition systems, the subject must carefully position the head laterally to make sure that the captured iris falls within the field-of-view of the digital image acquisition system. Furthermore, the need for sufficient energy at the plane of the detector calls for a relatively fast optical system which results in a narrow depth-of-field. This latter issue requires the user to move the head back and forth until the iris is in good focus. In this paper, we address the depth-of-field problem by studying the effectiveness of specially designed aspheres that extend the depth-of-field of the image capture system. In this initial study, we concentrate on the cubic phase mask originally proposed by Dowski and Cathey. Laboratory experiments are used to produce representative captured irises with and without cubic asphere masks modifying the imaging system. The iris images are then presented to a well-known iris recognition algorithm proposed by Daugman. In some cases we present unrestored imagery and in other cases we attempt to restore the moderate blur introduced by the asphere. Our initial results show that the use of such aspheres does indeed relax the depth-of-field requirements even without restoration of the blurred images. Furthermore, we find that restorations that produce visually pleasing iris images often actually degrade the performance of the algorithm. Different restoration parameters are examined to determine their usefulness in relation to the recognition algorithm.
Biometric technology for human identification. Conference | 2005
Ramkumar Narayanswamy; Paulo E. X. Silveira; Harsha Setty; V. Paul Pauca; Joseph van der Gracht
Iris recognition imaging is attracting considerable interest as a viable alternative for personal identification and verification in many defense and security applications. However, current iris recognition systems suffer from limited depth of field, which makes these systems difficult for untrained users to operate. Traditionally, the depth of field is increased by reducing the imaging system aperture, which adversely impacts the light-capturing power and thus the system signal-to-noise ratio (SNR). In this paper we discuss a computational imaging system, referred to as Wavefront Coded(R) imaging, for increasing the depth of field without sacrificing the SNR or the resolution of the imaging system. This system employs a specially designed Wavefront Coded lens customized for iris recognition. We present experimental results that show the benefits of this technology for biometric identification.
Proceedings of SPIE | 2011
V. Paul Pauca; Michael Forkin; Xiao Xu; Robert J. Plemmons; Arun Ross
Ocular recognition is a new area of biometric investigation targeted at overcoming the limitations of iris recognition performance in the presence of non-ideal data. There are several advantages to extending the region of interest beyond the iris, yet there are also key issues that must be addressed, such as the size of the ocular region, factors affecting performance, and appropriate corpora to study these factors in isolation. In this paper, we explore and identify some of these issues with the goal of better defining parameters for ocular recognition. An empirical study is performed in which iris recognition methods are contrasted with texture and point operators on existing iris and face datasets. The experimental results show a dramatic recognition performance gain when additional features are considered in the presence of poor-quality iris data, offering strong evidence for extending interest beyond the iris. The experiments also highlight the need for the direct collection of additional ocular imagery.
acm southeast regional conference | 2007
Qiang Zhang; Han Wang; Robert J. Plemmons; V. Paul Pauca
Three major objectives in processing hyperspectral image data of an object (target) are data compression, spectral signature identification of constituent materials, and determination of their corresponding fractional abundances. Here we propose a novel approach to processing hyperspectral data using nonnegative tensor factorization (NTF), which decomposes a large tensor into three factor matrices whose Khatri-Rao product approximates the original tensor. This approach preserves physical characteristics of the data, such as nonnegativity, and can be used to satisfy all three major objectives. Test results are reported for space object identification.
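The relationship between the three factor matrices and the original tensor can be sketched directly; the tensor dimensions and rank below are illustrative assumptions:

```python
import numpy as np

# Sketch: three factor matrices A, B, C reconstruct a rank-r tensor
# through the Khatri-Rao product (the CP/PARAFAC model underlying NTF).
rng = np.random.default_rng(1)
I, J, K, r = 4, 5, 6, 3
A = rng.random((I, r))
B = rng.random((J, r))
C = rng.random((K, r))

def khatri_rao(Xm, Ym):
    # Columnwise Kronecker product: column l is kron(Xm[:, l], Ym[:, l]).
    return np.einsum('ir,jr->ijr', Xm, Ym).reshape(Xm.shape[0] * Ym.shape[0], -1)

# Mode-1 unfolding of the tensor: T_(1) = A @ (C ⊙ B)^T.
T_unfold = A @ khatri_rao(C, B).T
T = T_unfold.reshape(I, K, J).transpose(0, 2, 1)   # fold back to I x J x K

# Check against the elementwise CP construction T[i,j,k] = sum_r A B C.
T_ref = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.allclose(T, T_ref))                        # True
```

Storing A, B, and C instead of the full tensor is what yields the compression noted above, while keeping every factor nonnegative preserves the physical interpretability of spectra and abundances.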
workshop on hyperspectral image and signal processing evolution in remote sensing | 2013
Qiang Zhang; V. Paul Pauca; Robert J. Plemmons; D. Dejan Nikic
Due to lack of direct illumination, objects under shadows often reflect significantly fewer photons into a remote hyperspectral imaging (HSI) sensor, leading to radiance levels near or below the noise floor. Attempts to perform object classification based on these observed radiances often produce poor results, grouping pixels in shaded areas into the same class. By fusing LiDAR and HSI data through a physical model, we develop a simple and efficient illumination correction method to remove the direct illumination component of the observed HSI radiance data. This correction then enables accurate object classification, regardless of whether spectral signatures are exposed directly to sunlight. In addition, methods for estimating the area under shadow and geometric parameters such as the direct illumination factor and the sky-view factor from LiDAR data are presented.
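The kind of physical model used for such a correction can be sketched as follows; the irradiance values, reflectances, and per-pixel geometry factors below are synthetic assumptions standing in for quantities the paper derives from LiDAR:

```python
import numpy as np

# Sketch of a per-pixel illumination model:
#   observed ≈ reflectance * (d * E_sun + v * E_sky),
# where d is the direct-illumination factor (0 in full shadow)
# and v is the sky-view factor.
rng = np.random.default_rng(3)
E_sun, E_sky = 1.0, 0.2                   # assumed irradiance levels
reflectance = rng.random(5)               # true per-pixel reflectance

d = np.array([1.0, 1.0, 0.0, 0.0, 0.5])  # 0 = fully shadowed pixel
v = np.array([1.0, 0.9, 0.8, 0.6, 0.9])  # sky-view factor per pixel

observed = reflectance * (d * E_sun + v * E_sky)

# Correction: divide out the per-pixel illumination so that shaded and
# sunlit pixels of the same material become directly comparable.
corrected = observed / (d * E_sun + v * E_sky)
print(np.allclose(corrected, reflectance))   # True
```

In the noiseless sketch the recovery is exact; in practice the shadowed pixels' low photon counts mean the correction also amplifies noise, which is why radiance near the noise floor remains challenging.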