Publication


Featured research published by John F. Arnold.


IEEE Transactions on Circuits and Systems for Video Technology | 2000

A cell-loss concealment technique for MPEG-2 coded video

Jian Zhang; John F. Arnold; Michael R. Frater

Audio-visual and other multimedia services are seen as important sources of traffic for future telecommunication networks, including wireless networks. A major drawback with some wireless networks is that they introduce a significant number of transmission errors into the digital bitstream. For video, such errors can have the effect of degrading the quality of service to the point where it is unusable. We introduce a technique that allows for the concealment of the impact of these errors. Our work is based on MPEG-2 encoded video transmitted over a wireless network whose data structures are similar to those of asynchronous transfer mode (ATM) networks. Our simulations include the impact of the MPEG-2 systems layer and cover cell-loss rates up to 5%. This is substantially higher than those that have been discussed in the literature up to this time. We demonstrate that our new approach can significantly increase received video quality, but at the cost of a considerable computational overhead. We then extend our technique to allow for higher computational efficiency and demonstrate that a significant quality improvement is still possible.
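The abstract does not spell out the concealment mechanism, so the sketch below shows only a generic temporal-concealment baseline (copying the co-located macroblock from the previous decoded frame), not the technique proposed in the paper; the 16x16 macroblock size follows MPEG-2 and all names are illustrative.

    # Illustrative baseline only, not the paper's technique: lost 16x16
    # macroblocks (flagged in `lost`) are filled from the co-located
    # macroblock of the previous decoded frame.
    import numpy as np

    MB = 16  # MPEG-2 macroblock size

    def conceal_lost_macroblocks(curr, prev, lost):
        """curr, prev: (H, W) luma frames; lost: boolean (H//MB, W//MB) map."""
        out = curr.copy()
        for my, mx in zip(*np.nonzero(lost)):
            y, x = my * MB, mx * MB
            out[y:y + MB, x:x + MB] = prev[y:y + MB, x:x + MB]
        return out

More elaborate concealment schemes typically borrow motion vectors from neighbouring macroblocks rather than assuming zero motion, which is where the computational overhead mentioned above tends to arise.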


IEEE Transactions on Geoscience and Remote Sensing | 1997

The lossless compression of AVIRIS images by vector quantization

Michael J. Ryan; John F. Arnold

The structure of hyperspectral images reveals spectral responses that would seem ideal candidates for compression by vector quantization. This paper outlines the results of an investigation of lossless vector quantization of 224-band Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images. Various vector formation techniques are identified and suitable quantization parameters are investigated. A new technique, mean-normalized vector quantization (M-NVQ), is proposed which produces compression performance approaching the theoretical minimum compressed image entropy of 5 bits/pixel. Images are compressed from original image entropies of between 8.28 and 10.89 bits/pixel to between 4.83 and 5.90 bits/pixel.
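The abstract names mean-normalized vector quantization without detailing its construction; the snippet below is a hedged sketch of what mean normalization of spectral vectors could look like if the name is read literally (each pixel spectrum minus its own mean, with the mean kept as side information), and may differ from the paper's exact scheme.

    # Hedged sketch inferred from the name "M-NVQ"; the paper's exact
    # construction may differ. Each pixel spectrum has its mean removed
    # before codebook search; the mean is carried separately.
    import numpy as np

    def mean_normalize(spectra):
        """spectra: (N, bands) array of pixel spectra.
        Returns (zero-mean vectors, per-vector means)."""
        means = spectra.mean(axis=1, keepdims=True)
        return spectra - means, means.squeeze(axis=1)

For lossless operation the VQ index alone would not suffice: the residual between each original vector and its reconstructed codevector would also have to be entropy coded.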


International Journal of Remote Sensing | 1996

Reliably estimating the noise in AVIRIS hyperspectral images

R. E. Roger; John F. Arnold

A new method is presented for computing the noise affecting each band of an AVIRIS hyperspectral image. Between-band (spectral) and within-band (spatial) correlations are used to decorrelate the image data via linear regression. Each band of the image is divided into small blocks, each of which is independently decorrelated. The decorrelation leaves noise-like residuals whose variance estimates the noise. A homogeneous set of these variances is selected and their values are combined to provide the best estimate of that band's noise. This method provides consistent noise estimates from images with very different land cover types. Its performance is validated by comparing its noise estimates with noise measures provided with two AVIRIS images. The method works well with inhomogeneous images (e.g., of a vegetated area such as Jasper Ridge), unlike a method described recently by Gao. The method is automatic and does not require the intervention of a human operator. Noise estimates are presented for 10...
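As a rough illustration of the block-wise regression idea described above, the sketch below regresses each small block of a band on a spatial neighbour (the pixel to the left in the same band) and a spectral neighbour (the same pixel in the adjacent band) and takes the residual standard deviation as that block's noise estimate. The particular neighbour choices are assumptions, and the median across blocks merely stands in for the paper's homogeneous-subset selection, which is not reproduced here.

    # Hedged numpy sketch of block-wise regression noise estimation.
    import numpy as np

    def estimate_band_noise(cube, k, block=8):
        """cube: (bands, H, W) image cube; k >= 1. Returns a noise estimate
        for band k as the median of per-block residual standard deviations."""
        band, ref = cube[k].astype(float), cube[k - 1].astype(float)
        sigmas = []
        H, W = band.shape
        for y in range(0, H - block + 1, block):
            for x in range(1, W - block + 1, block):
                b = band[y:y + block, x:x + block].ravel()
                spatial = band[y:y + block, x - 1:x - 1 + block].ravel()
                spectral = ref[y:y + block, x:x + block].ravel()
                A = np.column_stack([spatial, spectral, np.ones_like(b)])
                coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
                resid = b - A @ coeffs
                sigmas.append(resid.std(ddof=A.shape[1]))
        return float(np.median(sigmas))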


IEEE Transactions on Image Processing | 1994

A perceptually efficient VBR rate control algorithm

Mark R. Pickering; John F. Arnold

This paper describes a rate control algorithm for a variable bit-rate (VBR) video coder. The algorithm varies the quantizer step size of the coder according to properties of an image sequence that affect the perception of errors. The algorithm also limits the output bit-rate of the coder without the use of buffers to more efficiently use network bandwidth. It is shown that a VBR encoder using this algorithm will provide decoded image sequences with a consistent perceived quality that is comparable with, or better than, the perceived quality of images coded with a constant bit-rate (CBR) encoder.
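The abstract does not disclose which perceptual properties drive the quantizer, so the snippet below only conveys the general flavour of activity-driven VBR control: a coarser quantizer step where spatial masking is strong, and a floor on the step that loosely bounds peak rate without a buffer model. The variance-based activity proxy and all constants are placeholders, not the paper's algorithm.

    # Generic flavour of perceptually driven VBR control, not the paper's
    # algorithm: the quantizer step grows with a spatial-activity (masking)
    # proxy; q_min loosely bounds the peak per-block rate, q_max bounds the
    # coarseness. All constants are illustrative placeholders.
    import numpy as np

    def choose_qstep(block, base_q=8.0, activity_gain=0.05, q_min=2.0, q_max=62.0):
        """Pick a quantizer step for one block from its spatial activity."""
        activity = block.astype(float).var()
        return float(np.clip(base_q + activity_gain * activity, q_min, q_max))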


IEEE Transactions on Circuits and Systems for Video Technology | 1994

A new statistical model for traffic generated by VBR coders for television on the broadband ISDN

Michael R. Frater; John F. Arnold; Patrick Tan

Many techniques have been proposed for modeling source rates generated by variable bit rate video coders. However, little work has been done to verify these models, especially in the area of measuring the quality of their predictions for network performance and the grade of service experienced by a customer. In this paper, we summarize the previous work in this area, highlight some of the difficulties in effectively verifying these models, and suggest methods by which source rate models for VBR video might be more effectively verified in the future. In addition to studying the problem of model verification, a new model is proposed. Using this model, a video sequence is characterized by four parameters. As in previous cases, the number of bits generated by the coder for a sequence of video frames is modeled. The new model differs from previous models in that this sequence is not Markovian.


IEEE Transactions on Circuits and Systems for Video Technology | 2000

Efficient drift-free signal-to-noise ratio scalability

John F. Arnold; Michael R. Frater; Yaqiang Wang

Signal-to-noise ratio (SNR) scalability has been incorporated into the MPEG-2 video-coding standard to allow for the delivery of two services with the same spatial and temporal resolution but different levels of quality. In this paper, we begin by reviewing the performance of a single-loop SNR scalable encoder that is compliant with the MPEG-2 standard and demonstrate that its performance is limited by drift in the base layer. We also look at an alternative single-loop drift-free noncompliant SNR scalable encoder, but discover that its coding efficiency is poor. We then review the performance of an MPEG-compliant two-loop SNR scalable encoder. Finally, we propose a new two-loop noncompliant encoder which achieves improved coding performance at the expense of some increase in encoder and decoder complexity.


Remote Sensing of Environment | 1997

Lossy compression of hyperspectral data using vector quantization

Michael J. Ryan; John F. Arnold

Efficient compression techniques are required for the coding of hyperspectral data. Lossless compression is required in the transmission and storage of data within the distribution system. Lossy techniques have a role in the initial analysis of hyperspectral data where large quantities of data are evaluated to select smaller areas for more detailed evaluation. Central to lossy compression is the development of a suitable distortion measure, and this work discusses the applicability of extant measures in video coding to the compression of hyperspectral imagery. Criteria for a remote sensing distortion measure are developed and suitable distortion measures are discussed. One measure [the percentage maximum absolute distortion (PMAD) measure] is considered to be a suitable candidate for application to remotely sensed images. The effect of lossy compression is then investigated on the maximum likelihood classification of hyperspectral images, both directly on the original reconstructed data and on features extracted by the decision boundary feature extraction (DBFE) technique. The effect of the PMAD measure is determined on the classification of an image reconstructed with varying degrees of distortion. Despite some anomalies caused by challenging discrimination tasks, the classification accuracy of both the total image and its constituent classes remains predictable as the level of distortion increases. Although total classification accuracy is reduced from 96.8% for the original image to 82.8% for the image compressed with 4% PMAD, the loss in accuracy is not significant (less than 8%) for most classes other than those that present a challenging classification problem. Yet the compressed image is 1/17 the size of the original.
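PMAD is only named here, so the function below is a hedged, literal reading of "percentage maximum absolute distortion" (the largest per-pixel absolute error expressed as a percentage of the original pixel value); the paper's exact normalization may differ.

    # Hedged, literal reading of the PMAD name; the paper's exact
    # normalization may differ.
    import numpy as np

    def pmad(original, reconstructed, eps=1e-9):
        """Largest per-pixel absolute error as a percentage of the original value."""
        orig = original.astype(float)
        rel = np.abs(orig - reconstructed.astype(float)) / (np.abs(orig) + eps)
        return float(100.0 * rel.max())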


IEEE Journal on Selected Areas in Communications | 1997

MPEG 2 video services for wireless ATM networks

Jian Zhang; Michael R. Frater; John F. Arnold; Terence M. Percival

Audio-visual and other multimedia services are seen as an important source of traffic for future telecommunications networks, including wireless networks. We examine the impact of the properties of a 50 Mb/s asynchronous transfer mode (ATM)-based wireless local-area network (WLAN) on Moving Picture Experts Group phase 2 (MPEG 2) compressed video traffic, with emphasis on the network's error characteristics. The paper includes a description of the WLAN system used and its loss characteristics, a brief discussion of relevant aspects of the MPEG 2 standards and the associated error resilience techniques for minimizing the effect of transmission errors, and a description of the method by which the video data is organized for transmission on the network. We show results on the effect of cell loss due to transmission errors on the quality of the decoded video at the receiver, and demonstrate how error resilience techniques in both the systems and video layers of MPEG 2 can be used to improve the quality of service. Situations where up to 1% of the data is lost due to network transmission errors are examined. Most important among the findings are that error resilience experiments that do not take into account the effect of the MPEG 2 systems layer will tend to significantly overestimate the quality of received video, and that the error resilience techniques provided within the MPEG 2 standard are not sufficient to provide acceptable quality with acceptable overheads, but that this quality can be significantly increased by the addition of a small number of simple techniques.


IEEE Transactions on Geoscience and Remote Sensing | 1994

Reversible image compression bounded by noise

R. E. Roger; John F. Arnold

Reversible image compression rarely achieves compression ratios larger than about 3:1. An explanation of this limit is offered, which hinges upon the additive noise the sensor introduces into the image. Simple models of this noise allow lower bounds on the bit rate to be estimated from sensor noise parameters rather than from ensembles of typical images. The model predicts that an 8-b single-band image subject to noise with unit standard deviation can be compressed reversibly to no less than 2.0 b/pixel, equivalent to a maximum compression ratio of about 4:1. The model has been extended to multispectral imagery. The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) is used as an example, as the noise in its 224 bands is well characterized. The model predicts a lower bound on the bit rate for the compressed data of about 5.5 b/pixel when a single codebook is used to encode all the bands. A separate codebook for each band (i.e., 224 codebooks) reduces this bound by 0.5 b/pixel to about 5.0 b/pixel, but 90% of this reduction is provided by only four codebooks. Empirical results corroborate these theoretical predictions.
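Assuming the bound comes from the entropy of additive Gaussian sensor noise, the quoted figure can be checked with a one-line calculation: a unit-standard-deviation Gaussian carries about 0.5 * log2(2*pi*e*sigma^2) ~ 2.05 bits, consistent with the stated lower bound of roughly 2.0 b/pixel for an 8-bit image.

    # Back-of-the-envelope check under a Gaussian-noise assumption:
    # differential entropy of N(0, sigma^2), in bits.
    import math

    def gaussian_noise_entropy_bits(sigma):
        return 0.5 * math.log2(2 * math.pi * math.e * sigma ** 2)

    print(gaussian_noise_entropy_bits(1.0))  # ~2.05 bits, matching the ~2.0 b/pixel bound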


IEEE Transactions on Multimedia | 2009

An Efficient Mode Selection Prior to the Actual Encoding for H.264/AVC Encoder

Manoranjan Paul; Michael R. Frater; John F. Arnold

Many video compression algorithms require decisions to be made to select between different coding modes. In the case of H.264, this includes decisions about whether or not motion compensation is used, and the block size to be used for motion compensation. It has been proposed that constrained optimization techniques, such as the method of Lagrange multipliers, can be used to trade off between the quality of the compressed video and the bit rate generated. In this paper, we show that in many cases of practical interest, very similar results can be achieved with much simpler optimizations. Mode selection by simply minimizing the distortion with motion vectors and header information produces very similar performance to the full constrained optimization, while reducing the mode selection time and overall encoding time by 31% and 12%, respectively. The proposed approach can be applied together with fast motion search algorithms and mode filtering algorithms for further speed-up.
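One reading of the comparison above is the difference between the full Lagrangian rate-distortion cost and a simplified cost whose rate term counts only motion-vector and header bits; the sketch below contrasts the two, with the lambda weighting and the function names as illustrative assumptions rather than H.264 reference-software specifics.

    # Hedged contrast of the two cost functions; names and the lambda
    # weighting are illustrative assumptions.
    def lagrangian_cost(distortion, total_bits, lam):
        """Full RD cost J = D + lambda * R, with R covering all coded bits."""
        return distortion + lam * total_bits

    def simplified_cost(distortion, mv_bits, header_bits, lam):
        """Simplified cost: distortion plus only motion-vector and header
        bits, leaving residual-coding bits out of the rate term."""
        return distortion + lam * (mv_bits + header_bits)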

Collaboration


John F. Arnold's top co-authors and their affiliations.

Top Co-Authors

Michael R. Frater, University of New South Wales
Mark R. Pickering, University of New South Wales
Michael C. Cavenor, University of New South Wales
Moyuresh Biswas, University of New South Wales
Wee Sun Lee, National University of Singapore
James Macnicol, University of New South Wales
Jianxin Wei, University of New South Wales
Getian Ye, University of New South Wales
Edward H. S. Lo, University of New South Wales
Abdullah Al Muhit, University of New South Wales