
Publication


Featured research published by John M. Libert.


Visual Communications and Image Processing | 2000

Video Quality Experts Group: Current Results and Future Directions

Ann Marie Rohaly; Philip J. Corriveau; John M. Libert; Arthur A. Webster; Vittorio Baroncini; John Beerends; Jean-Louis Blin; Laura Contin; Takahiro Hamada; David Harrison; Andries Pieter Hekstra; Jeffrey Lubin; Yukihiro Nishida; Ricardo Nishihara; John C. Pearson; Antonio Franca Pessoa; Neil Pickford; Alexander Schertz; Massimo Visca; Andrew B. Watson; Stefan Winkler

The Video Quality Experts Group (VQEG) was formed in October 1997 to address video quality issues. The group is composed of experts from various backgrounds and affiliations, including participants from several internationally recognized organizations working in the field of video quality assessment. The first task undertaken by VQEG was to provide a validation of objective video quality measurement methods leading to recommendations in both the telecommunications and radiocommunication sectors of the International Telecommunications Union. To this end, VQEG designed and executed a test program to compare subjective video quality evaluations to the predictions of a number of proposed objective measurement methods for video quality in the bit rate range of 768 kb/s to 50 Mb/s. The results of this test show that there is no objective measurement system that is currently able to replace subjective testing. Depending on the metric used for evaluation, the performance of eight or nine models was found to be statistically equivalent, leading to the conclusion that no single model outperforms the others in all cases. The greatest achievement of this first validation effort is the unique data set assembled to help future development of objective models.


Measurement Science and Technology | 2006

Correlation of Topography Measurements of NIST SRM 2460 Standard Bullets by Four Techniques

Jun-Feng Song; Theodore V. Vorburger; Thomas B. Renegar; Hyug-Gyo Rhee; A Zheng; L Ma; John M. Libert; Susan M. Ballou; Benjamin Bachrach; K Bogart

Three optical instruments (an interferometric microscope, a Nipkow disc confocal microscope and a laser scanning confocal microscope) and a stylus instrument are used for measurements of bullet profile signatures of a National Institute of Standards and Technology (NIST) Standard Reference Material (SRM) 2460 standard bullet. The two-dimensional profile signatures are compared with the virtual bullet standard signature established by the same stylus instrument. The bullet signature differences are quantified by the maximum of the cross-correlation function, CCFmax. If the compared signatures were exactly the same, CCFmax would be 100%. Comparison results show close agreement among the four techniques for bullet profile signature measurements; the average CCFmax values are higher than 90%. This supports the possibility of using surface topography techniques for ballistic identification as an alternative to the current technology based on image comparisons.
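The CCFmax figure of merit is, in essence, the peak of a normalized cross-correlation between two profile signatures. The sketch below is only an illustration of that idea, not the NIST implementation, and makes no claim about the paper's exact normalization or registration procedure:

```python
import numpy as np

def ccf_max(profile_a, profile_b):
    """Maximum of the normalized cross-correlation between two
    zero-meaned profile signatures, expressed as a percentage.
    Identical signatures yield 100%."""
    a = np.asarray(profile_a, dtype=float)
    b = np.asarray(profile_b, dtype=float)
    a = a - a.mean()
    b = b - b.mean()
    # Cross-correlate at every relative lag, then normalize by the
    # geometric mean of the two signal energies (Cauchy-Schwarz bound).
    cc = np.correlate(a, b, mode="full")
    denom = np.sqrt(np.dot(a, a) * np.dot(b, b))
    return 100.0 * cc.max() / denom
```

Because of the normalization, a signature compared with a scaled copy of itself still scores 100%, which is the behavior one wants when the same topography is measured by instruments with different gains.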


SMPTE Journal | 1998

Perceptual Effects of Noise in Digital Video Compression

Charles Fenimore; John M. Libert; Stephen Wolf

We present results of subjective viewer assessment of video quality of MPEG-2 compressed video containing wide-band Gaussian noise. The video test sequences consisted of seven test clips (both classical and new material) to which noise with a peak signal-to-noise ratio (PSNR) ranging from 28 dB to 47 dB was added. We used software encoding and decoding at five bit rates ranging from 1.8 Mb/s to 13.9 Mb/s. Our panel of 32 viewers rated the difference between the noisy input and the compression-processed output. For low noise levels, the subjective data suggest that compression at higher bit rates can actually improve the quality of the output, effectively acting like a low-pass filter. We define an objective and a subjective measure of scene criticality (the difficulty of compressing a clip) and find that the two measures correlate for our data. For difficult-to-encode material (high criticality), the data suggest that the effects of compression may be less noticeable at mid-level noise, while for easy-to-encode video (low criticality), the addition of a moderate amount of noise to the input led to lower quality scores. This suggests that either the compression process may have reduced noise impairments or a form of masking may occur in scenes that have high levels of spatial detail.
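Noise levels like those above can be produced in principle by adding zero-mean Gaussian noise of a chosen standard deviation to a frame and evaluating the standard PSNR formula. A minimal sketch, in which the frame data, noise level, and 8-bit peak value are arbitrary placeholders rather than the study's actual test material:

```python
import numpy as np

def psnr(reference, noisy, peak=255.0):
    """Peak signal-to-noise ratio, in dB, between a reference frame
    and a degraded version of it."""
    mse = np.mean((np.asarray(reference, dtype=float)
                   - np.asarray(noisy, dtype=float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
# A synthetic 8-bit "frame" stands in for real video content.
frame = rng.integers(0, 256, size=(480, 640)).astype(float)
# Wide-band Gaussian noise; sigma = 10 gives a PSNR near the noisy
# (28 dB) end of the range studied.
noisy = frame + rng.normal(0.0, 10.0, size=frame.shape)
```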


National Institute of Standards and Technology (U.S.) | 2008

Assessing Face Acquisition

Mary F. Theofanos; Brian C. Stanton; Charles L. Sheppard; Ross J. Micheals; John M. Libert; Shahram Orandi

The requirements necessary for taking a successful face picture are fairly straightforward. The camera must be operational, and the subject must be illuminated sufficiently, facing the camera and in focus. Yet, a significant portion of the facial photographs taken at United States ports of entry are unusable for the purposes of automatic face recognition. In this paper, we consider the usability components of the face image capture process that contribute to the relatively high ratio of unusable images collected by United States Visitor and Immigrant Status Indicator Technology (US-VISIT). In addition, we introduce a general evaluation methodology, including the use of a simple image overlay, to quantify various characteristics of face imagery. The experimental context mimicked the point-of-entry environment, but with specific usability enhancements. The collected data suggest that these usability enhancements may be used to improve face image capture with the current equipment. US-VISIT requested that the biometrics usability team at the National Institute of Standards and Technology (NIST) examine the current US-VISIT face image collection process to identify any usability and human factors changes that may improve the existing face image capture process. As such, this study did not address other technologies or technology solutions. This report presents the results of a study that examined five usability and human factors enhancements to the current US-VISIT collection process:
1. the camera should resemble a traditional camera;
2. the camera should click when the picture is taken, to provide feedback to the traveler that the picture is being taken;
3. the camera should be used in portrait mode;
4. the operator should face the traveler and the monitor while positioning the camera; and
5. markings should be provided on the floor (such as footprints) to indicate to the traveler where to stand for the photograph.
The study was conducted as follows: first, we visited and observed a representative operational setting (Dulles Airport) in order to understand the primary users and the context of use. Based on these observations, we identified the five usability and human factors enhancements enumerated above that may improve the face image capture process. A usability study was designed that mimicked the operational process but incorporated the five enhancements, and face images were collected from 300 participants. A visual inspection evaluation methodology based on an image overlay was used to quantify the various characteristics of face imagery against the face image standards. Results from the visual inspection process compared …


Human Vision and Electronic Imaging Conference | 2000

Video Quality Experts Group: The Quest for Valid Objective Methods

Philip J. Corriveau; Arthur A. Webster; Ann Marie Rohaly; John M. Libert

Subjective assessment methods have been used reliably for many years to evaluate video quality. They continue to provide the most reliable assessments compared to objective methods. Some issues that arise with subjective assessment include the cost of conducting the evaluations and the fact that these methods cannot easily be used to monitor video quality in real time. Furthermore, traditional, analog objective methods, while still necessary, are not sufficient to measure the quality of digitally compressed video systems. Thus, there is a need to develop new objective methods utilizing the characteristics of the human visual system. While several new objective methods have been developed, there is to date no internationally standardized method. The Video Quality Experts Group (VQEG) was formed in October 1997 to address video quality issues. The group is composed of experts from various backgrounds and affiliations, including participants from several internationally recognized organizations working in the field of video quality assessment. The majority of participants are active in the International Telecommunications Union (ITU) and VQEG combines the expertise and resources found in several ITU Study Groups to work towards a common goal. The first task undertaken by VQEG was to provide a validation of objective video quality measurement methods leading to Recommendations in both the Telecommunications (ITU-T) and Radiocommunication (ITU-R) sectors of the ITU. To this end, VQEG designed and executed a test program to compare subjective video quality evaluations to the predictions of a number of proposed objective measurement methods for video quality in the bit rate range of 768 kb/s to 50 Mb/s. The results of this test show that there is no objective measurement system that is currently able to replace subjective testing. 
Depending on the metric used for evaluation, the performance of eight or nine models was found to be statistically equivalent, leading to the conclusion that no single model outperforms the others in all cases. The greatest achievement of this first validation effort is the unique data set assembled to help future development of objective models.


Proceedings of SPIE | 2001

Standard illumination source for the evaluation of display measurement methods and instruments

John M. Libert; Paul A. Boynton; Edward F. Kelley; Steven W. Brown; Yoshihiro Ohno; Farshid Manoocheri

A prototype display measurement assessment transfer standard (DMATS) is being developed by NIST to assist the display industry in standardizing measurement methods used to quantify and specify the performance of electronic displays. Designed as an idealized electronic display, the DMATS illumination source emulates photometric and colorimetric measurement problems commonly encountered in measuring electronic displays. NIST will calibrate DMATS units and distribute them to participating laboratories for measurement. Analysis of initial interlaboratory comparison results will provide a baseline assessment of display measurement uncertainties. Also, diagnostic indicators expected to emerge from the data will be used to assist laboratories in correcting deficiencies or in identifying metrology problem areas for further research, such as measurement techniques tailored to new display technologies. This paper describes the design and construction of a prototype DMATS source and preliminary photometric and colorimetric characterization. Also, this paper compares measurements obtained by several instruments under constant environmental conditions and examines the effects of veiling glare on chromaticity measurements.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2018

Mobile Phone Fingerprinting: A Picture is Worth a Thousand Words

Susanne M. Furman; Mary F. Theofanos; John M. Libert; John D. Grantham; Brian C. Stanton

The United States Department of Homeland Security (DHS) relies on the use of biometrics as an important component of its mission to keep America safe. Risks are involved with current systems that use contact fingerprint technology, such as the transmission of pathogens by contacting the scanner. Touchless systems address this risk but also introduce new challenges. Sixty National Institute of Standards and Technology (NIST) employees participated in the study to determine the usability of three mobile phone devices as well as the image quality of the resulting fingerprint images. All participants had previous experience with capturing their prints using a contact device and, as a result, tried to touch the screen on the mobile phone to capture their fingerprints. All the participants owned mobile phones and were aware of the phones' features, including the camera; the capture process for the mobile phone devices involved using the phone's camera to take a photo of the fingerprints. We believe that participants did not equate the capture process to taking a photo of their fingers; they were using an existing mental model for capturing their fingerprints and, as a result, touched the phone's glass screen. The devices provided little, if any, and often confusing instruction to assist the user, and little if any feedback regarding the success of the capture. To study the image quality of the prints, we assisted the participants in collecting a set of prints using both the mobile phone devices and the contact field devices. We compared the image quality and the interoperability of the contactless captures with the legacy contact captures. Currently, the image quality and interoperability are less than desirable.


Special Publication (NIST SP) - 500-306 | 2016

Certification Pathway for Downsampling 1000 ppi Fingerprint Friction Ridge Imagery to 500 ppi

John M. Libert; John D. Grantham; Craig I. Watson

This publication is available free of charge from: http://dx.doi.org/10.6028/NIST.SP.500-306. The document describes the procedure by which fingerprint image downsampling procedures will be evaluated with respect to conformance to the NIST guidance for sample rate reduction of 1000 ppi friction ridge images to 500 ppi as specified in NIST Special Publication 500-289 [NIST3]. This guidance is to be followed whenever 1000 ppi images are to be prepared for comparison to legacy 500 ppi fingerprint databases for submittal to the FBI's Next Generation Identification system. Conformance will ensure that CODECs produce encoded files that retain fidelity to the non-compressed source images within empirically set limits, and that such files can be decoded by other conformant JPEG 2000 CODECs while maintaining the required fidelity. The document describes the attributes of a set of fingerprint images selected for conformance testing and the rationale for selecting these images based on the spatial frequency fidelity of downsampled images to non-compressed images scanned at 500 ppi, where downsampling is performed using the Gaussian filter with subsampling method specified in NIST SP 500-289. Finally, the document describes the procedure to be followed in having an algorithm tested for conformance and the metrics used to measure conformance, and provides instructions on how to run the protocol and submit results to NIST for evaluation.
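The "Gaussian filter with subsampling" method named above can be sketched as a separable Gaussian low-pass followed by 2:1 decimation in each direction. The sigma and kernel radius below are illustrative placeholders only; the actual filter parameters and boundary handling are specified in NIST SP 500-289, and a conformant implementation must follow that document:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def downsample_2to1(image, sigma=0.85, radius=4):
    """Low-pass the image with a separable Gaussian, then keep every
    second row and column (e.g. 1000 ppi -> 500 ppi).
    sigma/radius are illustrative, not the SP 500-289 values."""
    k = gaussian_kernel(sigma, radius)
    # Separable convolution: filter along rows, then along columns.
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="same"), 1, np.asarray(image, float))
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred[::2, ::2]
```

The low-pass step suppresses spatial frequencies above the new Nyquist limit before subsampling, which is what prevents the ridge-pattern aliasing that naive decimation would introduce.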


Electronic Imaging | 2004

Projection display metrology at NIST: measurements and diagnostics

Paul A. Boynton; Edward F. Kelley; John M. Libert

With the advent of digital cinema, medical imaging, and other applications, the need to properly characterize projection display systems has become increasingly more crucial. Several standards organizations have developed or are presently developing measurement procedures (including ANSI, IEC, ISO, VESA, and SMPTE). The National Institute of Standards and Technology (NIST) has played an important role by evaluating standards and procedures, developing diagnostics, and providing technical and editorial input, especially where unbiased technical expertise is needed to establish credibility and to investigate measurement problems.


Fourth Oxford Conference on Spectroscopy | 2003

Current projects in display metrology at the NIST flat panel display laboratory

Paul A. Boynton; Edward F. Kelley; John M. Libert

The NIST Flat Panel Display Laboratory (FPDL) is operated through the Display Metrology Project (DMP) of the Electronic Information Technology Group in the Electricity Division of the Electronics and Electrical Engineering Laboratory of NIST. The DMP works to develop and refine measurement procedures in support of ongoing electronic display metrology, and applies the results in the development of national and international standards for flat panel display characterization.

Collaboration


Dive into John M. Libert's collaborations.

Top Co-Authors (all at the National Institute of Standards and Technology)

John D. Grantham
Shahram Orandi
Edward F. Kelley
Michael D. Garris
Paul A. Boynton
Stephen S. Wood
Brian C. Stanton
Charles Fenimore
Mary F. Theofanos