Rodney L. Miller
Eastman Kodak Company
Publications
Featured research published by Rodney L. Miller.
systems man and cybernetics | 1991
James R. Sullivan; Lawrence A. Ray; Rodney L. Miller
The visibility of binary image noise at low-to-medium dot densities of current binary printers, i.e., 300-400 dots/in, is not subthreshold. Standard halftoning algorithms such as clustered-dot or dispersed-dot dither produce periodic patterns at these dot densities that are easily visible at normal viewing distances in uniform areas. A novel method of halftoning computer-generated uniform areas that reduces noise visibility by using a database of minimum visual modulation bit patterns is introduced. Applications include all paint-by-numbers prints such as those that offer tint-fill or object highlighting.
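A minimal sketch of how such a pattern database might be applied to render a uniform area is given below; pattern_db is a hypothetical lookup (8-bit gray level to a stored binary bit pattern), and the names and array shapes are illustrative rather than the authors' implementation.

import numpy as np

def halftone_uniform_area(gray_level, height, width, pattern_db):
    # pattern_db is a hypothetical dict: 8-bit gray level -> small binary
    # numpy array holding the stored minimum-visual-modulation bit pattern.
    tile = pattern_db[gray_level]
    reps = (-(-height // tile.shape[0]), -(-width // tile.shape[1]))  # ceiling division
    # Tile the stored pattern over the area and crop to the requested size.
    return np.tile(tile, reps)[:height, :width]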
Journal of Electronic Imaging | 1997
Kevin E. Spaulding; Rodney L. Miller; Jay S. Schildkraut
Blue-noise dither halftoning methods have been found to produce images with pleasing visual characteristics. Results similar to those generated with error-diffusion algorithms can be obtained using an image processing algorithm that is computationally much simpler to implement. The various techniques that have been used to design blue-noise dither matrices are reviewed and compared. In particular, a series of visual cost function based methods and several techniques that involve designing the dither matrices by analyzing the spatial dot distribution are discussed. Ways to extend the basic blue-noise dither techniques to multilevel and color output devices are also described, including recent advances in the design of jointly optimized color blue-noise dither matrices.
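To illustrate why dither-matrix halftoning is computationally simpler than error diffusion, a sketch of applying a precomputed blue-noise dither matrix follows; dither_matrix is assumed to exist already (designed offline by one of the methods the paper reviews) with thresholds on the same scale as the image.

import numpy as np

def dither_halftone(image, dither_matrix):
    # image: 2-D uint8 grayscale array; dither_matrix: 2-D array of thresholds
    # on the same 0-255 scale, designed offline (e.g. a blue-noise matrix).
    h, w = image.shape
    th, tw = dither_matrix.shape
    # Tile the matrix over the image; each pixel is a single comparison, with
    # no neighborhood error propagation as in error diffusion.
    thresholds = np.tile(dither_matrix, (-(-h // th), -(-w // tw)))[:h, :w]
    return (image >= thresholds).astype(np.uint8)  # 1 = dot on, 0 = dot off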
Journal of Electronic Imaging | 1999
Kevin E. Spaulding; Rodney L. Miller
A method to halftone an image using a set of dither bitmaps designed to minimize the visibility of halftone patterns is reported. These dither bitmaps for different gray levels are partially correlated with each other, with an associated correlation interval. As a result, the halftone patterns for each gray level are more optimal than those associated with fully correlated dither bitmap techniques, without introducing the objectionable artifacts associated with the fully uncorrelated dither bitmap approach. Implementation details are given, as well as experimental results.
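The lookup structure implied by this approach can be sketched as follows, assuming a hypothetical stack bitmaps of shape (256, th, tw) in which bitmaps[g] is the designed pattern for gray level g and the partial correlation between neighboring levels has been built in at design time.

import numpy as np

def halftone_with_bitmap_stack(image, bitmaps):
    # image: 2-D uint8 grayscale array; bitmaps: (256, th, tw) binary array,
    # one designed dither bitmap per gray level (partially correlated across levels).
    h, w = image.shape
    _, th, tw = bitmaps.shape
    rows = np.arange(h) % th            # tiled row position within the bitmap
    cols = np.arange(w) % tw            # tiled column position within the bitmap
    # Each output pixel is read directly from its gray level's bitmap.
    return bitmaps[image, rows[:, None], cols[None, :]]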
signal processing systems | 2010
David D. Conger; Mrityunjay Kumar; Rodney L. Miller; Jiebo Luo; Hayder Radha
As displays become cheaper and are incorporated into more and more devices, there has been an increased focus on image resizing techniques to fit an image to an arbitrary screen size. Traditional methods such as cropping or resampling can introduce undesirable losses in information or distortion in perception. Recently, content-aware image retargeting methods have been proposed ([1][2][4][6][7]) which produce exceptional results. In particular, seam carving, proposed by Avidan and Shamir, has gained attention as an effective solution. However, there are many cases where it can fail. In this paper we propose an improved seam carving algorithm which incorporates antialiasing and thresholding techniques. Experiments have demonstrated superior performance over the current seam carving methods.
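For reference, the core of basic seam carving (as introduced by Avidan and Shamir) can be sketched as below: a gradient-magnitude energy map and a dynamic-programming search for the minimum-energy vertical seam. The antialiasing and thresholding refinements proposed in the paper are not reproduced here.

import numpy as np

def gradient_energy(image):
    # Simple gradient-magnitude energy map for a 2-D grayscale float image.
    gy, gx = np.gradient(image)
    return np.abs(gx) + np.abs(gy)

def find_vertical_seam(energy):
    # Dynamic programming: cost[i, j] = cheapest seam ending at pixel (i, j).
    h, w = energy.shape
    cost = energy.astype(float)
    for i in range(1, h):
        left = np.r_[np.inf, cost[i - 1, :-1]]
        right = np.r_[cost[i - 1, 1:], np.inf]
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    # Backtrack from the cheapest pixel in the bottom row.
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    return seam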
Journal of Electronic Imaging | 1999
Qing Yu; Kevin J. Parker; Kevin E. Spaulding; Rodney L. Miller
Multilevel halftoning (multitoning) is an extension of bitonal halftoning, in which the appearance of intermediate tones is created by the spatial modulation of more than two tones, i.e., black, white, and one or more shades of gray. In this paper, the conventional multitoning approach and a previously proposed approach, both using stochastic screen dithering, are investigated. A human visual model is employed to measure the perceived halftone error for both algorithms. The performance of each algorithm at gray levels near the printer's intermediate output levels is compared. Based on this study, a new overmodulation algorithm is proposed. The multitone output is mean preserving with respect to the input, and the new algorithm requires little additional computation. It will be shown that, with this simple overmodulation scheme, we will be able to manipulate the dot patterns around the intermediate output levels to achieve desired halftone patterns. Implementation issues related to optimal output level selection and inkjet-printing simulation for this new scheme will also be reported.
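A conventional multitoning step of the kind investigated here can be sketched as follows: the input range is split at the available output levels, and within each interval a tiled stochastic screen decides between the two bracketing tones. Names, value ranges, and the example level set are illustrative only; the overmodulation scheme proposed in the paper is not reproduced.

import numpy as np

def multitone(image, screen, output_levels):
    # image: float array in [0, 1]; screen: dither thresholds in [0, 1);
    # output_levels: sorted printable tones, e.g. [0.0, 0.33, 0.66, 1.0].
    levels = np.asarray(output_levels, dtype=float)
    h, w = image.shape
    th, tw = screen.shape
    thresh = np.tile(screen, (-(-h // th), -(-w // tw)))[:h, :w]
    # Index of the output level at or just below each pixel value.
    idx = np.clip(np.searchsorted(levels, image, side="right") - 1, 0, len(levels) - 2)
    lo, hi = levels[idx], levels[idx + 1]
    frac = (image - lo) / (hi - lo)          # position within the local interval
    return np.where(frac > thresh, hi, lo)   # screen picks one of the two bracketing tones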
IS&T/SPIE's Symposium on Electronic Imaging: Science and Technology | 1993
Rodney L. Miller; Craig Smith
In multilevel halftoning, the appearance of intermediate shades of gray is created by the spatial modulation of more than two tones, i.e., black, white, and one or more gray tones. Periodic multilevel halftoning can be implemented similarly to bitonal halftoning by using N-1 identically sized threshold matrices for N available output levels. The amount of modulation in the output image is dependent on both the number of output levels and the spatial arrangement of threshold values. A method is presented for assessing the modulation resulting from a periodic multilevel halftone algorithm. The method is based on the constraint that the digital output of the halftone process be mean-preserving with respect to the input. This constraint is applied to the tone transfer functions of each pixel in the halftone cell, producing the result that the sum of the derivatives of all the unquantized tone transfer functions must equal the number of pixels in the halftone cell for all input values. This rule leads to a simple graphical technique for evaluating the modulation in a halftone algorithm as well as suggesting an infinite number of ways to vary the modulation. The application of this method to traditional as well as alternate halftone architectures is discussed.
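In symbols, writing t_k(g) for the unquantized tone transfer function of pixel k in a halftone cell of M pixels (notation introduced here for illustration), the mean-preserving constraint and the derivative rule it implies are

\frac{1}{M} \sum_{k=1}^{M} t_k(g) = g \quad \text{for all } g
\qquad\Longrightarrow\qquad
\sum_{k=1}^{M} \frac{d\, t_k(g)}{dg} = M \quad \text{for all } g.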
ACM Transactions on Graphics | 2012
Sen Wang; Tingbo Hou; John Border; Hong Qin; Rodney L. Miller
Image deblurring has been a very challenging problem in recent decades. In this article, we propose a high-quality image deblurring method with a novel image prior based on a new imaging system. The imaging system has a newly designed sensor pattern achieved by adding panchromatic (pan) pixels to the conventional Bayer pattern. Since these pan pixels are sensitive to all wavelengths of visible light, they collect a significantly higher proportion of the light striking the sensor. A new demosaicing algorithm is also proposed to restore full-resolution images from pixels on the sensor. The shutter speed of the pan pixels is controllable by the user. Therefore, we can produce multiple images with different exposures. When a long exposure is needed under dim light, we read the pan pixels twice in one shot: once with a short exposure and once with a long exposure. The long-exposure image is often blurred, while the short-exposure image can be sharp but noisy. The short-exposure image plays an important role in deblurring, since it is sharp and there is no alignment problem for the one-shot image pair. On the algorithmic side, our method runs in a two-step maximum-a-posteriori (MAP) fashion under a joint minimization of the blur kernel and the deblurred image. The algorithm exploits a combined image prior with a statistical part and a spatial part, which is effective at controlling ringing artifacts. Extensive experiments under various conditions and settings are conducted to demonstrate the performance of our method.
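The alternating MAP structure described above corresponds to the standard blind-deconvolution form sketched below, in which B is the observed long-exposure (blurred) image, L the latent sharp image, K the blur kernel, and \rho_L, \rho_K stand in for the combined statistical/spatial image prior and a kernel prior; the exact terms and weights used in the article are not reproduced here.

\hat{K} \leftarrow \arg\min_{K} \; \lVert B - K \ast L \rVert^{2} + \lambda_K\, \rho_K(K),
\qquad
\hat{L} \leftarrow \arg\min_{L} \; \lVert B - K \ast L \rVert^{2} + \lambda_L\, \rho_L(L),

with the two minimizations alternated until convergence.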
signal processing systems | 2011
Mrityunjay Kumar; David D. Conger; Rodney L. Miller; Jiebo Luo; Hayder Radha
As displays become less expensive and are incorporated into more and more devices, there has been an increased focus on image resizing techniques to fit an image to an arbitrary screen size. Traditional methods such as cropping or resampling can introduce undesirable losses in information or distortion in perception. Recently, content-aware image retargeting methods have been proposed (Avidan and Shamir, ACM Trans Graphics 26(3), 2007; Guo et al., IEEE Trans Multimedia 11(5):856–867, 2009; Shamir and Avidan, Commun ACM 52(1), 2009; Simakov et al. 2008; Wolf et al. 2007), which produce exceptional results. In particular, seam carving, proposed by Avidan and Shamir, has gained attention as an effective solution. However, there are many cases where it can fail. In this paper we propose a distortion-sensitive seam carving algorithm for content-aware image resizing that improves edge preservation and decreases aliasing artifacts. In the proposed approach, we use local gradient information along with a thresholding technique to guide the seam selection process and provide a mechanism to halt seam carving when further processing would introduce unacceptable visual distortion in the resized image. Furthermore, an anti-aliasing filter is used to reduce the aliasing artifacts caused by seam removal. Experiments have demonstrated superior performance over the current seam carving methods.
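A minimal sketch of the halting idea (stop removing seams once the cheapest remaining seam crosses high-gradient content) is given below. It can be used with the gradient_energy and find_vertical_seam sketches shown earlier; max_seam_energy is an illustrative threshold, not the paper's criterion, and the paper's anti-aliasing filtering is not reproduced.

import numpy as np

def carve_with_halt(image, target_width, energy_fn, find_seam, max_seam_energy):
    # image: 2-D grayscale float array; energy_fn and find_seam can be the
    # gradient_energy / find_vertical_seam sketches shown earlier.
    img = image.copy()
    while img.shape[1] > target_width:
        energy = energy_fn(img)
        seam = find_seam(energy)
        rows = np.arange(img.shape[0])
        # Halt when removing the cheapest seam would cut through salient edges.
        if energy[rows, seam].mean() > max_seam_energy:
            break
        keep = np.ones(img.shape, dtype=bool)
        keep[rows, seam] = False
        img = img[keep].reshape(img.shape[0], img.shape[1] - 1)
    return img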
international conference on image processing | 2010
Tingbo Hou; Sen Wang; Hong Qin; Rodney L. Miller
The natural image prior has been proven to be a powerful tool for image deblurring in recent years, though its performance against noise in various applications has not been thoroughly studied. In this paper, we present a multigrid natural image prior for image deconvolution that enhances its robustness against noise, and describe three applications of image deconvolution using this prior: deblurring, super-resolution, and denoising. The prior is based on a remarkable property of natural images: derivatives at different resolutions follow the same heavy-tailed distribution, up to a spatial factor. It can serve in both blind and non-blind deconvolution. The performance of the proposed prior in the different applications is demonstrated by the corresponding experimental results.
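The heavy-tailed derivative statistic underlying such a prior is commonly written in hyper-Laplacian form; a generic version is sketched below (the paper's exact parameterization, including its resolution-dependent spatial factor, may differ).

p(\partial I) \;\propto\; \exp\!\bigl(-k\,\lvert \partial I \rvert^{\alpha}\bigr), \qquad 0 < \alpha < 1,

with the same functional form assumed to hold, up to a spatial factor, for derivatives computed at each level of the multigrid hierarchy.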
international conference on acoustics, speech, and signal processing | 2010
Mrityunjay Kumar; Rodney L. Miller
In this paper an image fusion approach is proposed for denoising digital images corrupted with signal-independent and signal-dependent noise. In the proposed approach, multiple captures of the same scene of interest are acquired and fused to estimate the original, noise-free image. This approach is motivated by the fact that noise is random in nature; hence, its interaction with the pixels will change with each capture, which in turn can be exploited for denoising purposes. In order to fuse multiple captures, a local affine model is developed to relate these captures and the corresponding original image. Furthermore, total variation (TV) regularization, which preserves discontinuity and is robust to noise, is used to solve the local affine fusion model iteratively to estimate the original image. While the proposed approach requires multiple captures, it is still computationally very fast and the quality of the denoised images clearly indicates the feasibility of the proposed approach.
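One way to write such a model down (symbols chosen here for illustration; they are not the authors' notation) relates the k-th capture y_k to the noise-free image x through locally varying affine coefficients and estimates x with a TV-regularized least-squares fit:

y_k(p) \;\approx\; a_k(p)\, x(p) + b_k(p),
\qquad
\hat{x} \;=\; \arg\min_{x} \; \sum_{k} \bigl\lVert y_k - a_k\, x - b_k \bigr\rVert^{2} \;+\; \lambda\, \mathrm{TV}(x),

solved iteratively as described above.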