
Publication


Featured research published by Jeffrey Lubin.


Visual Communications and Image Processing | 2000

Video Quality Experts Group: Current Results and Future Directions

Ann Marie Rohaly; Philip J. Corriveau; John M. Libert; Arthur A. Webster; Vittorio Baroncini; John Beerends; Jean-Louis Blin; Laura Contin; Takahiro Hamada; David Harrison; Andries Pieter Hekstra; Jeffrey Lubin; Yukihiro Nishida; Ricardo Nishihara; John C. Pearson; Antonio Franca Pessoa; Neil Pickford; Alexander Schertz; Massimo Visca; Andrew B. Watson; Stefan Winkler

The Video Quality Experts Group (VQEG) was formed in October 1997 to address video quality issues. The group is composed of experts from various backgrounds and affiliations, including participants from several internationally recognized organizations working in the field of video quality assessment. The first task undertaken by VQEG was to provide a validation of objective video quality measurement methods leading to recommendations in both the telecommunication and radiocommunication sectors of the International Telecommunication Union. To this end, VQEG designed and executed a test program to compare subjective video quality evaluations to the predictions of a number of proposed objective measurement methods for video quality in the bit rate range of 768 kb/s to 50 Mb/s. The results of this test show that there is no objective measurement system that is currently able to replace subjective testing. Depending on the metric used for evaluation, the performance of eight or nine models was found to be statistically equivalent, leading to the conclusion that no single model outperforms the others in all cases. The greatest achievement of this first validation effort is the unique data set assembled to help future development of objective models.
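Validations of this kind score each candidate metric by how closely its predictions track subjective mean opinion scores; Pearson correlation is one of the standard evaluation criteria. A minimal sketch (illustrative only; the function name is ours, and the full VQEG analysis used several criteria, not correlation alone):

```python
import numpy as np

def pearson_correlation(objective_scores, subjective_mos):
    """Pearson correlation between a model's quality predictions and
    subjective mean opinion scores -- one common evaluation criterion
    for objective video quality metrics."""
    return float(np.corrcoef(objective_scores, subjective_mos)[0, 1])
```

A perfectly linear predictor would score 1.0; in practice, several models scoring statistically indistinguishable correlations is exactly the "no single winner" outcome the abstract describes.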


Electronic Imaging | 2003

Robust content-dependent high-fidelity watermark for tracking in digital cinema

Jeffrey Lubin; Jeffrey A. Bloom; Hui Cheng

Forensic digital watermarking is a promising tool in the fight against piracy of copyrighted motion imagery content, but to be effective it must be (1) imperceptibly embedded in high-definition motion picture source, (2) reliably retrieved, even from degraded copies as might result from camcorder capture and subsequent very-low-bitrate compression and distribution on the Internet, and (3) secure against unauthorized removal. No existing watermarking technology has yet met these three simultaneous requirements of fidelity, robustness, and security. We describe here a forensic watermarking approach that meets all three requirements. It is based on the inherent robustness and imperceptibility of very low spatiotemporal frequency watermark carriers, and on a watermark placement technique that renders jamming attacks too costly in picture quality, even if the attacker has complete knowledge of the embedding algorithm. The algorithm has been tested on HD Cinemascope source material exhibited in a digital cinema viewing room. The watermark is imperceptible, yet recoverable after exhibition capture with camcorders, and after the introduction of other distortions such as low-pass filtering, noise addition, geometric shifts, and the manipulation of brightness and contrast.
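The core idea of a very low spatial-frequency carrier can be illustrated with a toy sketch: embed a bit by adding a faint, smooth cosine pattern and detect it by correlating against the same pattern. All names, parameters, and the informed (non-blind) detector are our assumptions; the paper's actual embedder operates on motion imagery and is far more elaborate:

```python
import numpy as np

def low_freq_carrier(h, w, fy=1, fx=1):
    """A very low spatial-frequency cosine pattern (hypothetical carrier)."""
    y, x = np.mgrid[0:h, 0:w]
    return np.cos(2 * np.pi * fy * y / h) * np.cos(2 * np.pi * fx * x / w)

def embed(frame, bit, strength=2.0):
    """Add the carrier with sign +/- according to the payload bit."""
    c = low_freq_carrier(*frame.shape)
    return frame + (1 if bit else -1) * strength * c

def detect(marked, original):
    """Informed detection: correlate the frame difference with the carrier.
    Low-frequency carriers survive blurring, noise, and small shifts well."""
    c = low_freq_carrier(*original.shape)
    return float(np.sum((marked - original) * c)) > 0
```

Because the carrier's energy sits at very low frequencies, low-pass filtering and compression remove little of it, which is the robustness property the abstract leans on.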


Asian Conference on Computer Vision | 2007

Content-based matching of videos using local spatio-temporal fingerprints

Gajinder Singh; Manika Puri; Jeffrey Lubin; Harpreet S. Sawhney

Fingerprinting is the process of mapping content, or fragments of it, into unique, discriminative hashes called fingerprints. In this paper, we propose an automated video identification algorithm that employs fingerprinting for storing videos inside its database. When queried with a degraded short video segment, the objective of the system is to retrieve, both accurately and in real time, the original video to which the segment corresponds. We present an algorithm that first extracts key frames for temporal alignment of the query and its actual database video, and then computes spatio-temporal fingerprints locally within such frames to indicate a content match. All stages of the algorithm have been shown to be highly stable and reproducible even when strong distortions are applied to the query.
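The two stages described above, key-frame selection and local fingerprint computation, can be sketched in miniature as follows, assuming grayscale frames as NumPy arrays. Names, thresholds, and the coarse block-mean hash are hypothetical, not the paper's algorithm:

```python
import numpy as np

def key_frames(frames, thresh=10.0):
    """Pick frames whose mean absolute change from the previous key frame
    exceeds thresh (a simple shot/content-change detector)."""
    keys = [0]
    for i in range(1, len(frames)):
        delta = np.mean(np.abs(frames[i].astype(float) - frames[keys[-1]].astype(float)))
        if delta > thresh:
            keys.append(i)
    return keys

def block_fingerprint(frame, grid=4):
    """Coarse binary fingerprint: for each cell of a grid x grid partition,
    record whether its mean luminance exceeds the overall mean. Such
    sign-based hashes stay stable under noise and mild compression."""
    h, w = frame.shape
    bh, bw = h // grid, w // grid
    means = [frame[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].mean()
             for i in range(grid) for j in range(grid)]
    m = float(np.mean(means))
    return tuple(int(v > m) for v in means)
```

Matching then reduces to comparing (e.g., by Hamming distance) the fingerprints of query key frames against those stored in the database.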


Medical Imaging 1997: Image Perception | 1997

Evaluation of human vision models for predicting human observer performance

Warren B. Jackson; Maya R. Said; David A. Jared; James O. Larimer; Jennifer Gille; Jeffrey Lubin

We demonstrate that human-vision-model-based image quality metrics correlate strongly not only with subjective evaluations of image quality but also with human observer performance on visual recognition tasks. By varying amorphous silicon image system design parameters, the performance of human observers in target identification using the resulting test images was measured and compared with the target-weighted just-noticeable difference produced by a human vision model applied to the same set of images. Model-observer detectability was highly correlated with human-observer detectability for a wide range of image system design parameters. These results demonstrate that the human vision model can be used to produce imaging systems optimized for human observer performance without the need for extensive human trials. The human-vision-based tumor detectors represent a generalization of channelized Hotelling models to nonlinear, perceptually based models.


Signal Processing: Image Communication | 2004

Accuracy and cross-calibration of video quality metrics: new methods from ATIS/T1A1

Michael H. Brill; Jeffrey Lubin; Pierre Costa; Stephen Wolf; John C. Pearson

Video quality metrics (VQMs) have often been evaluated and compared using simple measures of correlation to subjective mean opinion scores from panels of observers. However, this approach does not fully take into account the variability implicit in the observers. We present techniques for determining the statistical resolving power of a VQM, defined as the minimum change in the value of the metric for which subjective test scores show a significant change. Resolving power is taken as a measure of accuracy. These techniques have been applied to the Video Quality Experts Group (VQEG) data set and incorporated into the recent Alliance for Telecommunications Industry Solutions (ATIS) Committee T1A1 series of technical reports (TRs), which provide a comprehensive framework for characterizing and validating full-reference VQMs. These approved TRs, while not standards, will enable the US telecommunications industry to incorporate VQMs into contracts and tariffs for compressed video distribution. New methods for assessing VQM accuracy and cross-calibrating VQMs are an integral part of the framework. These methods have been applied to two VQMs at this point: peak signal-to-noise ratio and the version of Sarnoff's just-noticeable-difference metric (JNDmetrix®) tested by VQEG (Rapporteur Q11/12 (VQEG): Final report from the VQEG on the validation of objective models of video quality assessment, June 2000). The framework is readily extensible to additional VQMs.
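The resolving-power definition above can be sketched directly: scan all pairs of test conditions, keep those whose subjective scores differ significantly given observer variability, and report the smallest VQM difference among them. This is a deliberate simplification of the T1A1 procedure, with hypothetical names and a plain two-sample z-test:

```python
import math

def resolving_power(vqm_vals, mos_means, mos_stderrs, z_crit=1.96):
    """Smallest VQM difference between two conditions whose subjective
    mean opinion scores differ significantly (simplified illustration).
    Returns None if no pair differs significantly."""
    sig_deltas = []
    n = len(vqm_vals)
    for i in range(n):
        for j in range(i + 1, n):
            # Two-sample z statistic from the per-condition standard errors.
            se = math.sqrt(mos_stderrs[i] ** 2 + mos_stderrs[j] ** 2)
            if abs(mos_means[i] - mos_means[j]) / se > z_crit:
                sig_deltas.append(abs(vqm_vals[i] - vqm_vals[j]))
    return min(sig_deltas) if sig_deltas else None
```

A metric with small resolving power can certify small quality differences, which is what makes the quantity usable in contracts and tariffs.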


Proceedings of the 1999 Medical Imaging - Image Perception and Performance | 1999

Visual discrimination model for digital mammography

Jeffrey P. Johnson; Jeffrey Lubin; Elizabeth A. Krupinski; Heidi A. Peterson; Hans Roehrig; Andrew Baysinger

Numerous studies have been conducted to determine experimentally the effects of image processing and display parameters on the diagnostic performance of radiologists. Comprehensive optimization of imaging systems for digital mammography based solely on measurements of reader performance is impractical, however, due to the large number of interdependent variables to be tested. A reliable, efficient alternative is needed to improve the evaluation and optimization of new imaging technologies. The Sarnoff JNDmetrix™ Visual Discrimination Model (VDM) is a computational, just-noticeable difference model of human vision that has been applied successfully to predict performance in various nonmedical detection and rating tasks. To test the applicability of the VDM to specific detection tasks in digital mammography, two observer performance studies were conducted. In the first study, effects of display tone scale and peak luminance on the detectability of microcalcifications were evaluated. The VDM successfully predicted improvements in reader performance for perceptually linearized tone scales and higher display luminances. In the second study, the detectability of JPEG and wavelet compression artifacts was evaluated, and performance ratings were again found to be highly correlated with VDM predictions. These results suggest that the VDM would be useful in the assessment and optimization of new imaging and compression technologies for digital mammography.


Medical Imaging 1996: Physics of Medical Imaging | 1996

X-ray image system design using a human visual model

Warren B. Jackson; Peter Beebee; David A. Jared; David K. Biegelsen; James O. Larimer; Jeffrey Lubin; Jennifer Gille

Because of the complex response of the human visual system, typical measurements of image system quality such as the detective quantum efficiency, modulation transfer function, and signal-to-noise ratio cannot always be used to determine conditions for optimal perceptual image quality. Using a model of the human visual system, the ViDEOS/Sarnoff Human Vision Discrimination Model (HVM), this work demonstrates that human vision models provide a promising quantitative measure of perceptual image quality. The model requires an image and a matching reference image in order to determine the perceptual difference between the images at each point. A simple model of a digital amorphous silicon medical x-ray system is used to create the necessary images as a function of various design parameters. The image pairs are then analyzed by the HVM. Across many image system design variations, perceived image quality closely followed the quality measures determined by the HVM. Increasing the detector size actually increases the image quality in the presence of either readout or input noise. The model was also used to optimize the image system for a specific task. As an example, the effect of system design parameters on tumor identification in mammographic images is determined.


Medical Imaging 2002: Image Perception, Observer Performance, and Technology Assessment | 2002

Visual discrimination modeling of lesion detectability

Jeffrey P. Johnson; Jeffrey Lubin; John S. Nafziger; Dev P. Chakraborty

The Sarnoff JNDmetrix visual discrimination model (VDM) was applied to predict human psychophysical performance in the detection of simulated mammographic lesions. Contrast thresholds for the detection of synthetic Gaussian masses on mean backgrounds and simulated mammographic backgrounds were measured in two-alternative, forced-choice (2AFC) trials. Experimental thresholds for 2-D Gaussian signal detection decreased with increasing signal size on mean backgrounds and on 1/f³ filtered noise images presented with identical (paired) backgrounds. For 2AFC presentations of different (unpaired) filtered noise backgrounds, detection thresholds increased with increasing signal diameter, consistent with a decreasing signal-to-noise ratio. Thresholds for mean and paired filtered noise backgrounds were used to calibrate a new low-pass, spatial-frequency channel in the VDM. The calibrated VDM was able to predict accurate detection thresholds for Gaussian signals on mean and paired 1/f³ filtered noise backgrounds. To simulate noise-limited detection thresholds for unpaired backgrounds, an approach is outlined for the development of a VDM-based model observer based on statistical decision theory.
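The 2AFC methodology rests on the standard signal-detection relation between detectability d′ and proportion correct, Pc = Φ(d′/√2); a contrast threshold is then the contrast at which Pc reaches a criterion such as 75%. A minimal sketch of that relation and its inversion (function names are ours):

```python
import math

def two_afc_pc(dprime):
    """Proportion correct in a 2AFC task for a given d', using the
    standard relation Pc = Phi(d'/sqrt(2)), where Phi is the normal CDF.
    Phi(x) = 0.5*(1 + erf(x/sqrt(2))), so Pc = 0.5*(1 + erf(d'/2))."""
    return 0.5 * (1 + math.erf(dprime / 2))

def dprime_at_pc(target_pc=0.75, lo=0.0, hi=5.0):
    """Invert two_afc_pc by bisection: the d' at a criterion Pc,
    e.g. the conventional 75%-correct threshold level."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if two_afc_pc(mid) < target_pc:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Chance performance (d′ = 0) gives Pc = 0.5, and the 75%-correct criterion corresponds to d′ ≈ 0.95, the operating point at which such thresholds are usually reported.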


Human Vision and Electronic Imaging Conference | 2005

Reference-free objective quality metrics for MPEG-coded video

Hui Cheng; Jeffrey Lubin

With the growth of digital video delivery, there is an increasing demand for better and more efficient ways to measure video quality. Most existing video quality metrics are reference-based approaches that are not suitable for measuring the video quality perceived by the end user without access to reference videos. In this paper, we propose a reference-free video quality metric for MPEG-coded videos. It predicts subjective quality ratings using both reference-free MPEG artifact measures and MPEG system parameters (known or estimated). The advantage of this approach is that it does not need a precise separation of content and artifact or the removal of any artifacts. By exploring the correlations among different artifacts and system parameters, our approach can remove content dependency and achieve an accurate estimate of the subjective ratings.
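One family of reference-free MPEG artifact measures gauges blockiness from luminance discontinuities at 8×8 coding-block boundaries, with no reference video needed. An illustrative sketch of that idea (not the paper's metric; names and the normalization are ours):

```python
import numpy as np

def blockiness(frame, block=8):
    """Reference-free blockiness estimate: ratio of the mean horizontal
    luminance discontinuity at 8x8 block boundaries to the mean
    discontinuity elsewhere. Values near 1 suggest no blocking;
    large values suggest visible MPEG block artifacts."""
    f = frame.astype(float)
    col_diff = np.abs(np.diff(f, axis=1))          # horizontal gradients
    boundary = col_diff[:, block - 1::block].mean()  # at block edges
    mask = np.ones(col_diff.shape[1], bool)
    mask[block - 1::block] = False
    interior = col_diff[:, mask].mean()              # everywhere else
    return boundary / (interior + 1e-9)
```

A full metric along the lines the abstract describes would combine several such artifact measures with coder parameters (bit rate, quantization scale) in a fitted model of subjective ratings.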


International Conference on Image Processing | 2002

Accuracy and cross-calibration of video-quality metrics: new methods from ATIS/T1A1

Michael H. Brill; Jeffrey Lubin; Pierre Costa; John C. Pearson

Video quality metrics (VQMs) have often been evaluated and compared using simple measures of correlation with observers. This approach does not fully take into account the variability implicit in the observers. We present techniques for determining the statistical resolving power of a VQM, defined as the minimum change in the value of the metric for which subjective test scores show a significant change. These techniques have been applied to the VQEG dataset and incorporated into the recent ATIS/T1A1 series of technical reports (TRs), which provide a comprehensive framework for characterizing and validating full-reference video quality metrics. These approved TRs, while not standards, will enable the US telecommunications industry to incorporate video quality metrics into contracts and tariffs for compressed video distribution. New methods for assessing VQM accuracy and cross-calibrating VQMs are an integral part of the framework. These methods have been applied to two VQMs at this point: PSNR and the version of Sarnoff's JNDmetrix tested by VQEG. The framework is readily extensible to additional VQMs.
