James T. Chung-How
University of Bristol
Publication
Featured research published by James T. Chung-How.
international conference on image processing | 2000
James T. Chung-How; David R. Bull
Any real-time interactive video coding algorithm used over the Internet needs to be able to cope with packet loss, since the existing error recovery mechanisms are not suitable for real-time data. In this paper, a robust H.263+ video codec suitable for real-time interactive and multicast Internet applications is proposed. Initially, the robustness to packet loss of H.263 video packetised according to the RTP-H.263+ payload format specifications is assessed. Two techniques are proposed to minimise temporal error propagation: selective FEC of the motion information and the use of periodic reference frames. It is shown that when these two techniques are combined, the robustness of H.263+ video to packet loss is greatly improved.
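As a rough illustration of the selective-FEC idea (a minimal sketch assuming a single XOR parity code over fixed-length motion payloads, not the codec described in the paper), one parity packet per group of RTP packets lets the motion data of a single lost packet be rebuilt without retransmission:

```python
# Minimal sketch: one XOR parity packet protects the motion partitions of a
# group of RTP packets, so any single loss in the group can be repaired.
# Assumes equal-length motion payloads for simplicity.
from typing import List, Optional

def xor_parity(payloads: List[bytes]) -> bytes:
    """Byte-wise XOR over all motion payloads in the group."""
    parity = bytearray(len(payloads[0]))
    for p in payloads:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover_single_loss(received: List[Optional[bytes]], parity: bytes) -> List[bytes]:
    """Rebuild at most one missing payload (marked None) from the parity packet."""
    missing = [i for i, p in enumerate(received) if p is None]
    if len(missing) > 1:
        raise ValueError("a single XOR parity packet can only repair one loss")
    if missing:
        present = [p for p in received if p is not None]
        received[missing[0]] = xor_parity(present + [parity])
    return received
```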
IEEE Transactions on Vehicular Technology | 2008
Pierre Ferré; Angela Doufexi; James T. Chung-How; Andrew R. Nix; David R. Bull
Home wireless networks are mainly used for data transmission; however, they are now being used in video delivery applications, such as video on demand or wireless internet protocol (IP) television. Off-the-shelf technologies are inappropriate for the delivery of real-time video. In this paper, a packetization method is presented for robust H.264 video transmission over the IEEE 802.11 wireless local area network (WLAN) configured as a wireless home network. To overcome the poor throughput efficiency of the IEEE 802.11 Medium Access Control (MAC), an aggregation scheme with a recovery mechanism is deployed and evaluated via simulation. The scheme maps several IP packets (each containing a single H.264 video packet called a Network Abstraction Layer (NAL) unit) into a single larger MAC frame. Video robustness is enhanced by using small NAL units and by retrieving any error-free IP packets from the received MAC frame. The required modifications to the legacy MAC are described. Results in terms of throughput efficiency and peak signal-to-noise ratio (PSNR) are presented for broadcast and real-time transmission applications. Compared to the legacy case, an 80% improvement in throughput efficiency is achieved for a similar PSNR video performance. For fixed physical layer resources, our system provides a 2.5-dB gain in video performance over the legacy case for a similar throughput efficiency. The proposed solution provides considerable robustness enhancement for video transmission over IEEE 802.11-based WLANs.
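To give a flavour of the aggregation scheme (the sub-packet header layout and CRC choice below are assumptions for illustration, not the frame format defined in the paper), several NAL-unit-sized IP packets can be packed into one MAC payload with a per-packet length and checksum, so intact sub-packets can still be retrieved when the overall frame check fails:

```python
# Sketch of aggregation with per-packet recovery: each sub-packet carries its
# own length and CRC-32, so error-free sub-packets survive a corrupted frame.
import struct
import zlib

HDR = struct.Struct("!HI")  # 2-byte length + 4-byte CRC-32 per sub-packet

def aggregate(ip_packets):
    frame = bytearray()
    for pkt in ip_packets:
        frame += HDR.pack(len(pkt), zlib.crc32(pkt)) + pkt
    return bytes(frame)

def extract(frame):
    """Walk the (possibly corrupted) MAC payload and keep sub-packets whose CRC checks."""
    good, offset = [], 0
    while offset + HDR.size <= len(frame):
        length, crc = HDR.unpack_from(frame, offset)
        body = frame[offset + HDR.size: offset + HDR.size + length]
        if len(body) == length and zlib.crc32(body) == crc:
            good.append(bytes(body))
        offset += HDR.size + length
    return good
```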
visual communications and image processing | 2003
Dimitris Agrafiotis; Cedric Nishan Canagarajah; David R. Bull; Matt Dye; Helen Twyford; Jim Kyle; James T. Chung-How
The imminent arrival of mobile video telephony will enable deaf people to communicate, as hearing people have been able to do for some time now, anytime and anywhere in their own language, sign language. At low bit rates, coding of sign language sequences is very challenging due to the high level of motion and the need to maintain good image quality to aid understanding. This paper presents optimised coding of sign language video at low bit rates in a way that favours comprehension of the compressed material by deaf users. Our coding suggestions are based on an eye-tracking study that we have conducted, which allows us to analyse the visual attention of sign language viewers; the results of this study are included in this paper. Analysis and results for two coding methods, one using MPEG-4 video objects and the second using foveation filtering, are presented. Results with foveation filtering are very promising, offering a considerable decrease in bit rate in a way that is compatible with the visual attention patterns of deaf people, as recorded in the eye-tracking study.
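To give a sense of what foveation filtering does (a generic sketch using an assumed Gaussian-pyramid blend, not the filter used in the paper), the periphery of each frame is blurred progressively with distance from the fixation point, typically the signer's face, so the encoder spends fewer bits there:

```python
# Generic foveation-filtering sketch for a greyscale frame: blend between
# progressively blurred copies according to distance from the fixation point.
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(frame, fix_y, fix_x, max_sigma=4.0, n_levels=4):
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    ecc = np.hypot(yy - fix_y, xx - fix_x) / np.hypot(h, w)   # normalised eccentricity
    levels = [frame.astype(float)]                             # level 0: unfiltered
    levels += [gaussian_filter(frame.astype(float), sigma=max_sigma * k / (n_levels - 1))
               for k in range(1, n_levels)]
    idx = np.clip((ecc * (n_levels - 1)).round().astype(int), 0, n_levels - 1)
    out = np.empty_like(levels[0])
    for k in range(n_levels):
        out[idx == k] = levels[k][idx == k]
    return out.astype(frame.dtype)
```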
EURASIP Journal on Advances in Signal Processing | 2008
Pierre Ferré; James T. Chung-How; David R. Bull; Andrew R. Nix
Wireless local area networks (WLANs) such as IEEE 802.11a/g utilise numerous transmission modes, each providing different throughputs and reliability levels. Most link adaptation algorithms proposed in the literature (i) maximise the error-free data throughput, (ii) do not take into account the content of the data stream, and (iii) rely strongly on the use of ARQ. Low-latency applications, such as real-time video transmission, do not permit large numbers of retransmissions. In this paper, a novel link adaptation scheme is presented that improves the quality of service (QoS) for video transmission. Rather than maximising the error-free throughput, our scheme minimises the video distortion of the received sequence. With the use of simple and local rate-distortion measures and end-to-end distortion models at the video encoder, the proposed scheme estimates the received video distortion at the current transmission rate, as well as at the adjacent lower and higher rates. This allows the system to select the link speed that offers the lowest distortion and to adapt to the channel conditions. Simulation results are presented using the H.264/MPEG-4 AVC video compression standard over IEEE 802.11g. The results show that the proposed system closely follows the optimum theoretical solution.
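The mode-selection step can be pictured with a small sketch (the distortion model below is an invented stand-in; the paper's scheme uses rate-distortion measures computed at the video encoder): the link speed is chosen among the current mode and its two neighbours as the one with the lowest estimated received distortion:

```python
# Sketch of distortion-driven link adaptation: pick the PHY mode, among the
# current one and its immediate neighbours, that minimises estimated distortion.
def select_mode(current, n_modes, est_distortion):
    """est_distortion(mode) -> expected distortion (e.g. MSE) of the decoded video."""
    candidates = [m for m in (current - 1, current, current + 1) if 0 <= m < n_modes]
    return min(candidates, key=est_distortion)

# Hypothetical distortion model for illustration only: slower modes cost more
# source distortion (lower throughput); faster modes cost more loss-induced distortion.
def toy_distortion(mode, source_d=(40.0, 25.0, 15.0, 10.0),
                   loss_rate=(0.001, 0.01, 0.05, 0.2), loss_penalty=200.0):
    return source_d[mode] + loss_rate[mode] * loss_penalty

best_mode = select_mode(current=1, n_modes=4, est_distortion=toy_distortion)
```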
Signal Processing-image Communication | 2001
James T. Chung-How; David R. Bull
The Internet was designed mainly for non-real-time data, and the existing error recovery mechanisms cannot be used for time-critical applications because of their latency. Therefore, any real-time interactive application needs to be robust to packet losses, caused mainly by congestion at routers, and this is especially true for video transmission. In this paper, the problem of robust H.263+ video over the Internet is addressed. The effect of packet loss on H.263 (version 2) video transmitted over the Internet using the real-time transport protocol (RTP) is assessed, and different packetisation strategies are compared. The main problem is the temporal propagation of errors resulting from packet losses, a consequence of the motion compensation process. Two ways of minimising this propagation are proposed: the selective use of FEC on the motion information, and the use of periodic reference frames, themselves protected with FEC. The main advantage of these two techniques is that they introduce no more than one frame of delay and do not rely on retransmissions. When combined, they are shown to perform better at minimising error propagation than the use of periodic intraframes. Our robust H.263+ encoder has been integrated into vic, a videoconferencing tool widely used over the multicast backbone (MBone) of the Internet.
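A minimal sketch of the periodic-reference-frame scheduling (the period K and the bookkeeping are assumptions; the paper integrates this into the H.263+ encoder itself): every K-th frame is predicted from the last FEC-protected reference rather than from the previous frame, which bounds how far a loss can propagate:

```python
# Sketch: every K-th frame predicts from the last FEC-protected reference and
# itself becomes the new protected reference, so error propagation is bounded.
def choose_reference(frame_idx, last_protected_ref, K=10):
    """Return (reference frame index, whether this frame becomes a protected reference)."""
    if frame_idx % K == 0 and frame_idx > 0:
        return last_protected_ref, True     # predict from the protected reference
    return frame_idx - 1, False             # ordinary P-frame prediction

# Hypothetical encoder-loop bookkeeping:
last_protected = 0                           # assume frame 0 is an intra frame
for idx in range(1, 60):
    ref, becomes_protected = choose_reference(idx, last_protected)
    # encode frame idx predicted from frame `ref`; apply FEC if becomes_protected
    if becomes_protected:
        last_protected = idx
```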
visual communications and image processing | 2004
Tuan-Kiang Chiew; James T. Chung-How; David R. Bull; Cedric Nishan Canagarajah
This paper proposes a low-complexity sub-pixel refinement to motion estimation based on a full-search block matching algorithm (BMA) at integer-pixel accuracy. The algorithm eliminates the need to produce interpolated reference frames, which may be too memory- and processor-intensive for some real-time mobile applications. It assumes that the BMA is performed at integer-pixel resolution and that the sum-of-absolute-differences (SAD) values of the candidate motion vector and its neighbouring vectors are available for each block. The proposed method then models the SAD distribution around the candidate motion vector and its neighbouring points, and the actual minimum point at sub-pixel resolution is computed according to the model used. Three variations of the parabolic model are considered, and simulations using the H.263 standard encoder on several test sequences reveal an improvement of 1.0 dB over integer-accuracy motion estimation. Despite its simplicity, in some test cases the method comes close to the results obtained with actual interpolated reference frames.
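As an illustration of one parabolic variant (the exact parameterisations evaluated in the paper may differ), the sub-pixel offset in each direction can be taken as the minimum of a one-dimensional parabola fitted through the SAD of the best integer vector and its two neighbours:

```python
# 1-D parabolic fit through SAD(-1), SAD(0), SAD(+1): the minimum of the fitted
# parabola gives the sub-pixel offset in that direction (clamped to half-pel).
def parabolic_offset(sad_minus, sad_zero, sad_plus):
    denom = sad_minus - 2.0 * sad_zero + sad_plus
    if denom <= 0:                       # flat or degenerate surface: keep integer position
        return 0.0
    offset = 0.5 * (sad_minus - sad_plus) / denom
    return max(-0.5, min(0.5, offset))

def refine(mv_x, mv_y, sad):
    """sad[(dx, dy)] holds the SADs of the best integer vector (0, 0) and its neighbours."""
    dx = parabolic_offset(sad[(-1, 0)], sad[(0, 0)], sad[(1, 0)])
    dy = parabolic_offset(sad[(0, -1)], sad[(0, 0)], sad[(0, 1)])
    return mv_x + dx, mv_y + dy
```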
international conference on consumer electronics | 2002
Tuan-Kiang Chiew; James T. Chung-How; David R. Bull; C. Nishan Canagarajah
This paper proposes a novel low-complexity block-based global motion estimation algorithm for real-time digital video processing applications. The algorithm involves 8×8 block-based local motion estimation followed by a least-squares method to obtain the global motion parameters. A three-parameter translation-zoom global motion model is adopted, and the least-squares computation is performed after removing outliers. Blocks are considered outliers if (i) the block is over-cluttered or (ii) the reference block has too little activity for an accurate motion vector to be estimated. The algorithm is used to estimate the global motion of three QCIF test streams at 10 fps, and the parameters are used in five typical applications. The respective simulations have produced favourable results, emphasising the usefulness of the proposed algorithm.
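A sketch of the least-squares step (the coordinate convention is an assumption here, and outlier removal is applied beforehand) fits the three parameters, zoom s and translation (tx, ty), to the surviving block motion vectors:

```python
# Least-squares fit of the translation-zoom model u = s*x + tx, v = s*y + ty
# to block motion vectors, with block centres taken relative to the image centre.
import numpy as np

def fit_translation_zoom(centres, mvs):
    """centres: (N, 2) block-centre coordinates; mvs: (N, 2) local motion vectors."""
    x, y = centres[:, 0], centres[:, 1]
    u, v = mvs[:, 0], mvs[:, 1]
    ones, zeros = np.ones_like(x), np.zeros_like(x)
    # Stack the horizontal and vertical equations into one overdetermined system.
    A = np.block([[x[:, None], ones[:, None], zeros[:, None]],
                  [y[:, None], zeros[:, None], ones[:, None]]])
    b = np.concatenate([u, v])
    (s, tx, ty), *_ = np.linalg.lstsq(A, b, rcond=None)
    return s, tx, ty
```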
2010 18th International Packet Video Workshop | 2010
Robert Stapenhurst; Dimitris Agrafiotis; James T. Chung-How; Jon Pledge
In this paper we present a windowed rate adaptation scheme for transmitting delay-constrained video over a wireless channel. We consider an H.264 encoder with HRD-compliant rate control and a generic 802.11-like radio. The scheme assumes only loose coupling between system components, which improves implementation feasibility at the cost of some performance. To demonstrate the validity of our scheme, we perform simulations using channel traces collected with 802.11n radio hardware. We compare both simple and ideal implementations of our scheme with an ideal non-adaptive system. Our scheme is shown to offer improvements in PSNR, with significant further benefit obtainable from a more advanced implementation.
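The loose coupling can be illustrated with a small sketch (the window length and safety margin are assumptions, not the paper's settings): the sender tracks delivered goodput over a sliding window and feeds a slightly conservative target bit rate back to the encoder's rate control:

```python
# Sketch of windowed rate adaptation: estimate delivered goodput over a sliding
# time window and derive a conservative target bit rate for the encoder.
from collections import deque

class WindowedRateAdapter:
    def __init__(self, window_s=1.0, margin=0.9):
        self.window_s = window_s          # window length in seconds (assumed)
        self.margin = margin              # safety margin below measured goodput
        self.history = deque()            # (timestamp, bytes_delivered)

    def record(self, timestamp, bytes_delivered):
        self.history.append((timestamp, bytes_delivered))
        while self.history and timestamp - self.history[0][0] > self.window_s:
            self.history.popleft()

    def target_bitrate(self):
        if len(self.history) < 2:
            return None                   # not enough samples yet; keep the current rate
        span = self.history[-1][0] - self.history[0][0]
        delivered = sum(b for _, b in self.history)
        return self.margin * 8.0 * delivered / max(span, 1e-6)   # bits per second
```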
international symposium on circuits and systems | 1999
James T. Chung-How; David R. Bull
Most current image and video coding standards use variable length codes to achieve compression, which renders the compressed bitstream very sensitive to channel errors. In this paper, image and video coders based on Pyramid Vector Quantisation (PVQ) and using only fixed length codes are proposed. Still image coders using PVQ in conjunction with DCT and wavelet techniques are described, and their robustness to random channel errors is investigated. This work is then extended to video coding. A novel fixed-rate motion-compensated wavelet/PVQ video coder, suitable for low bit-rate applications and generating only fixed length codewords, is presented. Its compression performance and robustness to random bit errors are compared with those of H.263.
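To make the PVQ idea concrete (this shows only the quantisation onto the pyramid; the enumeration of codewords into fixed-length indices and the DCT/wavelet front end are omitted, and the pulse-adjustment rule is a common choice rather than the paper's), a coefficient vector is scaled so its L1 norm equals the pulse budget K and mapped to a nearby integer vector with exactly K pulses:

```python
# Sketch of pyramid vector quantisation: round the scaled vector, then add or
# remove unit pulses greedily until the L1 norm equals the pulse budget K.
import numpy as np

def pvq_quantise(x, K):
    x = np.asarray(x, dtype=float)
    xs = K * x / (np.sum(np.abs(x)) + 1e-12)     # scale so sum(|xs|) == K
    y = np.rint(xs).astype(int)
    while np.sum(np.abs(y)) < K:                 # too few pulses: add the most useful one
        e = xs - y
        gain = np.where(y != 0, np.sign(y) * e, np.abs(e))
        i = int(np.argmax(gain))
        y[i] += int(np.sign(y[i])) if y[i] != 0 else (1 if e[i] >= 0 else -1)
    while np.sum(np.abs(y)) > K:                 # too many pulses: remove the cheapest one
        e = xs - y
        cost = np.where(y != 0, np.sign(y) * e, np.inf)
        i = int(np.argmin(cost))
        y[i] -= int(np.sign(y[i]))
    return y
```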
international conference on consumer electronics | 2009
Pierre Ferré; Paul R. Hill; David Halls; James T. Chung-How; Andrew R. Nix; David R. Bull
VISUALISE aims to enhance the spectator experience at temporally and spatially dispersed public events by providing real-time access to unfolding events via wireless IP-enabled mobile phones and personal digital assistants. The VISUALISE project enhanced coverage of the World Rally Championship in South Wales (UK) in December 2007. Using consumer handheld devices, wireless technology was deployed to enable spectators to access live video feeds (including in-car footage), live timing data and live GPS information. Live multi-channel video was streamed to mobile terminals using a broadcast WiFi protocol. While this allowed many terminals to receive the service, enhanced IP encapsulation and robust video playback (including loss concealment) were required at the server and client, respectively. The VISUALISE project also developed a novel location-based approach for intelligent, automatic and personalised video-feed selection.