Kurt R. Bengtson
Hewlett-Packard
Publications
Featured research published by Kurt R. Bengtson.
international conference on image processing | 2013
Yang Lei; Kurt R. Bengtson; Lisa Li; Jan P. Allebach
3D shape reconstruction is one of the most important topics in computer vision due to its wide field of application. Among various technologies, structured light is considered to be one of the most reliable techniques. This paper addresses the problem of finding correspondences in a structured-light 3D shape reconstruction system. The work is based on the 3-symbol binary M-array encoded pattern proposed by Albitar. A new 6-symbol M-array pattern is designed with a guaranteed minimum Hamming distance of three among all 3 × 3 windows. Improvements to the decoding algorithm are made, which allow successful identification of most symbols and correct one possible error or missing symbol in each 3 × 3 window. Finally, refinements are made to the algorithm for finding correspondences, including back-projection of the captured structured-light pattern to the projector plane to eliminate projective distortion.
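The minimum-Hamming-distance property of such a pattern can be checked directly. The sketch below is a minimal illustration, assuming the pattern is stored as a 2-D list of integer symbols; it is not the paper's decoding algorithm.

```python
import itertools

def hamming(w1, w2):
    """Number of positions at which two flattened windows differ."""
    return sum(a != b for a, b in zip(w1, w2))

def windows_3x3(pattern):
    """Yield every 3x3 window of a 2-D symbol pattern as a flat tuple."""
    rows, cols = len(pattern), len(pattern[0])
    for r in range(rows - 2):
        for c in range(cols - 2):
            yield tuple(pattern[r + i][c + j]
                        for i in range(3) for j in range(3))

def min_pairwise_hamming(pattern):
    """Minimum Hamming distance over all distinct pairs of 3x3 windows.
    A valid pattern in the sense described above must return >= 3."""
    ws = list(windows_3x3(pattern))
    return min(hamming(a, b) for a, b in itertools.combinations(ws, 2))
```

A distance of at least three is what allows the decoder to correct one erroneous or missing symbol per window.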
2012 Western New York Image Processing Workshop | 2012
Eric Welch; Dorin Patru; Eli Saber; Kurt R. Bengtson
Most image processing algorithms are parallelizable, i.e., the calculation of one pixel does not affect another. SIMD architectures, including Intel's WMMX and SSE and ARM's NEON, can exploit this fact by processing multiple pixels at a time, which can result in significant speedups. This study investigates the use of NEON SIMD instructions for two image processing algorithms. The latter are altered to process four pixels at a time, for which a theoretical speedup factor of four can be achieved. In addition, parts of the original implementation have been replaced with inline functions or modified at the assembly-code level. Experimental benchmark data shows the actual execution speed to be between two and three times higher than the original reference. These results demonstrate that SIMD instructions can significantly speed up image processing algorithms through proper code manipulation.
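The four-pixels-at-a-time structure can be sketched in plain Python by mirroring a 4-lane vector register. This is only an illustration of the restructuring, not NEON code; the gain/offset operation and the multiple-of-four length requirement are assumptions for the sketch (a real kernel would add a scalar tail loop).

```python
def adjust_scalar(pixels, gain, offset):
    """Reference path: one pixel per iteration, with 8-bit saturation."""
    return [min(255, max(0, int(p * gain) + offset)) for p in pixels]

def adjust_simd_style(pixels, gain, offset):
    """SIMD-style path: four pixels per iteration, mirroring a 4-lane
    NEON register. Assumes len(pixels) is a multiple of 4."""
    out = []
    for i in range(0, len(pixels), 4):
        lane = pixels[i:i + 4]                         # load 4 lanes
        lane = [int(p * gain) + offset for p in lane]  # per-lane multiply-add
        lane = [min(255, max(0, p)) for p in lane]     # saturate to 8 bits
        out.extend(lane)                               # store 4 lanes
    return out
```

Both paths produce identical output; on real SIMD hardware the wide path processes its four lanes in one instruction.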
Proceedings of SPIE | 2011
Jin-Young Kim; Yung-Yao Chen; Mani Fischer; Omri Shacham; Carl Staelin; Kurt R. Bengtson; Jan P. Allebach
For electrophotographic printers, periodic clustered screens are preferable due to their homogeneous halftone texture and their robustness to dot gain. In traditional periodic clustered-dot color halftoning, each color plane is independently rendered with a different screen at a different angle. However, depending on the screen angle and screen frequency, the final halftone may have strong visible moiré due to the interaction of the periodic structures associated with the different color planes. This paper addresses the problem of finding optimal color screen sets that produce minimal visible moiré and homogeneous halftone texture. To achieve these goals, we propose new features including halftone microtexture spectrum analysis, common periodicity, and twist factor. The halftone microtexture spectrum is shown to predict the visible moiré more accurately than the conventional moiré-free conditions. Common periodicity and twist factor are used to determine whether the halftone texture is homogeneous. Our results demonstrate significant improvements to clustered-dot screens in minimizing visible moiré and having smooth halftone texture.
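The basic mechanism behind screen-interaction moiré is a low-frequency beat between the screens' frequency vectors. The sketch below illustrates that idea only; the frequency-vector difference is the textbook first-order beat, and the 30 cycles/inch visibility cutoff is an illustrative assumption, not a value from the paper (whose microtexture-spectrum analysis is considerably more refined).

```python
import math

def moire_frequency(f1, f2):
    """First-order beat (moiré) frequency vector of two periodic screens,
    given their frequency vectors in cycles per inch."""
    return (f1[0] - f2[0], f1[1] - f2[1])

def is_visible_moire(f1, f2, cutoff_cpi=30.0):
    """A low-frequency beat is objectionable; a sufficiently high-frequency
    beat is suppressed by the eye's falling contrast sensitivity.
    The cutoff value here is an assumed placeholder."""
    fx, fy = moire_frequency(f1, f2)
    return math.hypot(fx, fy) < cutoff_cpi
```

Searching for screen sets whose pairwise beats all land above such a cutoff is the flavor of constraint the paper's features are designed to evaluate more accurately.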
Proceedings of SPIE | 2014
Jing Dong; Kurt R. Bengtson; B. F. Robinson; Jan P. Allebach
Most of the 3D capture products currently in the market are high-end and pricey. They are not targeted for consumers, but rather for research, medical, or industrial usage. Very few aim to provide a solution for home and small business applications. Our goal is to fill this gap by using only low-cost components to build a 3D capture system that can satisfy the needs of this market segment. In this paper, we present a low-cost 3D capture system based on the structured-light method. The system is built around the HP TopShot LaserJet Pro M275. For our capture device, we use the 8.0 Mpixel camera that is part of the M275. We augment this hardware with two 3M MPro 150 VGA (640 × 480) pocket projectors. We also describe an analytical approach to predicting the achievable resolution of the reconstructed 3D object based on differentials and small signal theory, and an experimental procedure for validating that the system under test meets the specifications for reconstructed object resolution that are predicted by our analytical model. By comparing our experimental measurements from the camera-projector system with the simulation results based on the model for this system, we conclude that our prototype system has been correctly configured and calibrated. We also conclude that with the analytical models, we have an effective means for specifying system parameters to achieve a given target resolution for the reconstructed object.
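The small-signal flavor of such a resolution prediction can be illustrated with the standard triangulation relation z = f·b/d: differentiating gives dz ≈ z²/(f·b)·dd, so depth resolution degrades with the square of the working distance. This is a generic sketch under that textbook model, with illustrative parameter names; the paper's own analytical model for the M275 geometry is more detailed.

```python
def depth_resolution(z, baseline, focal_px, disparity_step=1.0):
    """Small-signal estimate of depth resolution for a triangulation-based
    structured-light system.

    z              : working distance (same length unit as baseline)
    baseline       : camera-projector separation
    focal_px       : focal length in pixels
    disparity_step : smallest resolvable disparity change, in pixels

    From z = f*b/d, dz ~= z**2 / (f*b) * dd."""
    return z ** 2 / (focal_px * baseline) * disparity_step
```

For example, at z = 500 mm with a 100 mm baseline and a 2000-pixel focal length, a one-pixel disparity step corresponds to roughly 1.25 mm of depth.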
IEEE Transactions on Image Processing | 2016
Seong Jun Park; Mark Q. Shaw; George Kerby; Terry M. Nelson; Di-Yuan Tzeng; Kurt R. Bengtson; Jan P. Allebach
In this paper, we consider a dual-mode process for the electrophotographic laser printer: low-frequency halftoning for smooth regions and high-frequency halftoning for detail regions. These regions are described by an object map that is extracted from the page description language version of the document. This manner of switching screens depending on the local content provides a stable halftone without artifacts in smooth areas and preserves the detail rendering in detail or texture areas. However, when switching between halftones with two different frequencies, jaggies may occur along the boundaries between areas halftoned with low- and high-frequency screens. To reduce the jaggies, our screens obey a harmonic relationship. In addition, we implement a blending process based on a transition region. We propose a nonlinear blending process in which at each pixel, we choose the maximum of the two weighted halftones, where the weights vary according to the position in the transition region. Moreover, we describe an online tone-mapping for the boundary blending process, based on an offline calibration procedure that effectively assures the desired tone values within the transition region.
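The max-of-weighted-halftones rule described above can be sketched per pixel as follows. The linear weight ramp across the transition region is an assumption for illustration; the paper pairs this blend with a tone-mapping calibration that this sketch omits.

```python
def blend_halftones(low, high, t):
    """Nonlinear blend of two halftone pixel values inside the transition
    region.

    low, high : pixel values from the low- and high-frequency halftones
    t         : normalized position across the transition region, in [0, 1]
                (0 = low-frequency side, 1 = high-frequency side)

    At each pixel the maximum of the two weighted halftones is kept,
    so dots from whichever screen dominates locally survive intact."""
    return max((1.0 - t) * low, t * high)
```

Taking the maximum rather than the sum avoids averaging the two screens into a third, unintended texture within the transition region.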
Proceedings of SPIE | 2013
Thanh Huy Ha; Chyuan-Tyng Wu; Peter Majewicz; Kurt R. Bengtson; Jan P. Allebach
The quality of images of objects with significant 3D structure, captured at close range under a flash, may be substantially degraded by glare and shadow regions. In this paper, we introduce an imaging system and corresponding algorithm to address this situation. The imaging system captures three frames of the stationary scene using a single camera in a fixed position, but an illumination source in three different positions, one for each frame. The algorithm includes two processes: shadow detection and image fusion. Through shadow detection, we can locate the area of shadows. After obtaining the shadow maps, we generate a more complete final image by image fusion. Our experimental results show that in most cases, the shadow and glare are markedly reduced.
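The intuition behind fusing three differently lit frames can be shown with a deliberately simplified per-pixel rule: the median of three samples discards a value that is shadowed (too dark) in one frame or in glare (too bright) in another. This median stand-in is an assumption for illustration; the paper's fusion is guided by explicit shadow maps rather than a plain median.

```python
def fuse_three_frames(f1, f2, f3):
    """Per-pixel fusion of three frames of a fixed scene lit from three
    different positions (frames as 2-D lists of intensities).
    The per-pixel median rejects a single shadowed or glared sample."""
    return [[sorted((a, b, c))[1] for a, b, c in zip(r1, r2, r3)]
            for r1, r2, r3 in zip(f1, f2, f3)]
```

Since shadows and glare move with the light source while the camera and scene stay fixed, each pixel is usually well exposed in at least two of the three frames, which is what makes a per-pixel selection rule viable at all.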
Proceedings of SPIE | 2013
Osborn de Lima; Sreenath Rao Vantaram; Sankaranarayanan Piramanayagam; Eli Saber; Kurt R. Bengtson
In this paper, we present an Edge Directed Super Resolution (EDSR) technique for grayscale and color images. The proposed algorithm is a multiple-pass iterative algorithm aimed at producing better-defined images with sharper edges. The basic premise behind this algorithm is interpolating along the edge direction, thus reducing the blur that comes from traditional interpolation techniques, which in some instances operate across the edge. To this effect, horizontal and vertical gradients, derived from the input reference image resized to the target resolution, are utilized to generate an edge direction map, which in turn is quantized into four discrete directions. The process then utilizes the multiple input images, shifted by a sub-pixel amount, to yield a single higher-resolution image. A cross-correlation-based registration approach determines the relative shifts between the frames. In the case of color images, the edge directed super resolution algorithm is applied to the L* channel in the L*a*b* color space, since most of the edge information is concentrated in that channel. The two color difference channels a* and b* are resized to a higher resolution using a conventional bicubic interpolation approach. The algorithm developed was applied to grayscale and color images and showed favorable results on a wide variety of datasets, ranging from printing to surveillance to regular consumer photography.
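The four-direction quantization step can be sketched as follows. The sketch assumes the edge runs perpendicular to the gradient and snaps the edge angle to 0°, 45°, 90°, or 135°; the paper's exact binning convention may differ.

```python
import math

def quantize_direction(gx, gy):
    """Quantize the edge direction implied by horizontal/vertical gradients
    (gx, gy) into one of four discrete directions: 0, 45, 90, 135 degrees.
    The gradient points across the edge, so the edge direction is the
    gradient angle rotated by 90 degrees."""
    angle = math.degrees(math.atan2(gy, gx)) % 180.0  # gradient angle, 0-180
    edge = (angle + 90.0) % 180.0                     # edge is perpendicular
    return int(round(edge / 45.0)) % 4 * 45           # snap to 4 bins
```

Interpolation is then performed along the returned direction, e.g. along 90° (vertically) for a vertical edge whose gradient is purely horizontal.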
international conference on image processing | 2015
Chyuan-Tyng Wu; Kurt R. Bengtson; Jan P. Allebach
In this work, we discuss the use of depth information to correct the distortion due to the curved shape of the pages of an open book in captured images. This work is relevant to camera-based capture devices that can use a projector to cast structured light patterns to provide depth information. In order to improve the visual quality of captured documents, we use 3D shape reconstruction methods and geometric rectification to flatten the curvature of an open book. Shading correction is applied to the captured image, as well. Our models exploit specific prior assumptions about the nature of the printed material that is captured. The improvement in captured open book images obtained by using our method can be observed in the included experimental results.
Proceedings of SPIE | 2014
Yue Wang; Osborn de Lima; Eli Saber; Kurt R. Bengtson
In this paper, we present an improved Edge Directed Super Resolution (EDSR) technique to produce enhanced edge definition and improved image quality in the resulting high-resolution image. The basic premise behind this algorithm remains, like its predecessor, to utilize gradient and spatial information and interpolate along the edge direction in a multiple-pass iterative fashion. The edge direction map, generated from horizontal and vertical gradients and resized to the target resolution, is quantized into eight directions over a 5 × 5 block, compared to four directions over a 3 × 3 block in the previous algorithm. This helps reduce the noise caused in part by quantization error, and the super-resolved results are significantly improved. In addition, an appropriate weighting encompassing the degree of similarity between the quantized edge direction and the actual edge direction is also introduced. In an attempt to determine the optimal super resolution parameters for the case of still image capture, a hardware setup was utilized to investigate and evaluate those factors. In particular, the number of images captured as well as the amount of sub-pixel displacement that yield a high-quality result were studied. This is done by utilizing an XY stage capable of sub-pixel movement. Finally, an edge-preserving smoothing algorithm contributes to improved results by reducing the high-frequency noise introduced by the super resolution process. The algorithm showed favorable results on a wide variety of datasets, ranging from transportation to multimedia-based print/scan applications, in addition to images captured with the aforementioned hardware setup.
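The direction-similarity weighting can be sketched with a simple angular-distance falloff. Eight directions over 0-180° give bins 22.5° wide; the linear falloff to zero at one bin width is an illustrative assumption, not the paper's exact weighting function.

```python
def direction_weight(actual_deg, quantized_deg, bin_width=22.5):
    """Weight reflecting how well the actual edge direction matches its
    quantized bin (eight bins of 22.5 degrees over 0-180 degrees).
    Returns 1.0 for a perfect match, falling linearly to 0.0 at one
    bin width of angular distance."""
    diff = abs(actual_deg - quantized_deg) % 180.0
    diff = min(diff, 180.0 - diff)          # angular distance on 0-180
    return max(0.0, 1.0 - diff / bin_width)
```

Down-weighting pixels whose true direction sits near a bin boundary reduces the contribution of poorly quantized directions to the interpolation.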
Proceedings of SPIE | 2013
Minwoong Kim; Kurt R. Bengtson; Lisa Li; Jan P. Allebach
The flicker artifact dealt with in this paper is the scanning distortion arising when an image is captured by a digital camera using a CMOS imaging sensor with an electronic rolling shutter under strong ambient light sources powered by AC. This type of camera scans a target line-by-line in a frame; therefore, time differences exist between the lines. This mechanism causes a captured image to be corrupted by the change of illumination. This phenomenon is called the flicker artifact. The non-content area of the captured image is used to estimate a flicker signal that is the key to being able to compensate for the flicker artifact. The average signal of the non-content area, taken along the scan direction, has local extrema where the peaks of flicker exist. The locations of the extrema are very useful information for estimating the desired distribution of pixel intensities, assuming that the flicker artifact does not exist. The flicker-reduced images compensated by our approach clearly demonstrate the reduced flicker artifact, based on visual observation.
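The averaging and extrema-location steps can be sketched as below. The sketch assumes the non-content margin is available as a 2-D list with one row per scan line and uses a plain three-point local-maximum test; the paper's estimation of the full flicker signal goes further than this.

```python
def scanline_means(non_content):
    """Mean intensity of each scan line in the non-content margin,
    yielding a 1-D signal along the scan direction."""
    return [sum(row) / len(row) for row in non_content]

def flicker_peaks(signal):
    """Indices of local maxima of the averaged signal. Under AC lighting
    and a rolling shutter, these mark where the flicker peaks fall
    along the scan direction."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i - 1] < signal[i] >= signal[i + 1]]
```

The spacing of the returned peak indices reflects the AC flicker period as sampled by the rolling shutter, which is what anchors the compensation.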