Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Graham R. Martin is active.

Publication


Featured research published by Graham R. Martin.


IEEE Transactions on Circuits and Systems for Video Technology | 2000

Quadtree-structured variable-size block-matching motion estimation with minimal error

Injong Rhee; Graham R. Martin; S. Muthukrishnan; Roger A. Packwood

This paper reports two efficient quadtree-based algorithms for variable-size block matching (VSBM) motion estimation. The schemes allow the dimensions of blocks to adapt to local activity within the image, and the total number of blocks in any frame can be varied while still accurately representing true motion. This permits adaptive bit allocation between the representation of displacement and residual data, and also the variation of the overall bit-rate on a frame-by-frame basis. The first algorithm computes the optimal selection of variable-sized blocks to provide the best achievable prediction error for a given number of blocks in a quadtree-based VSBM technique. The algorithm employs an efficient dynamic programming technique that exploits the special structure of a quadtree. Although this algorithm is computationally intensive, it provides a yardstick by which the performance of other, more practical VSBM techniques can be measured. The second algorithm adopts a heuristic approach to selecting variable-sized square blocks. It relies more on local motion information than on global error optimization. Experiments suggest that the effective use of local information contributes to minimizing the overall error. The result is a more computationally efficient VSBM technique than the optimal algorithm, but with comparable prediction error.
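
Neither the optimal dynamic-programming algorithm nor the heuristic described above is reproduced here, but a minimal Python sketch of the underlying quadtree-structured VSBM idea may help: a block is split into four quadrants whenever its best full-search matching error exceeds a per-pixel threshold, so block size adapts to local motion activity. The SAD criterion, the threshold test and all function names are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np

def sad(a, b):
    # Sum of absolute differences between two equal-sized blocks.
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def full_search(cur, ref, y, x, size, rng=7):
    # Exhaustive search in the reference frame for the motion vector
    # that minimises the SAD of the block at (y, x) in the current frame.
    h, w = ref.shape
    block = cur[y:y + size, x:x + size]
    best_mv, best_err = (0, 0), float('inf')
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            ry, rx = y + dy, x + dx
            if 0 <= ry and ry + size <= h and 0 <= rx and rx + size <= w:
                err = sad(block, ref[ry:ry + size, rx:rx + size])
                if err < best_err:
                    best_mv, best_err = (dy, dx), err
    return best_mv, best_err

def quadtree_vsbm(cur, ref, y, x, size, thresh, min_size=4):
    # Split a block into four quadrants whenever its matching error is
    # too large, so block size adapts to local motion activity.
    mv, err = full_search(cur, ref, y, x, size)
    if err <= thresh * size * size or size <= min_size:
        return [(y, x, size, mv, err)]
    half = size // 2
    leaves = []
    for oy in (0, half):
        for ox in (0, half):
            leaves += quadtree_vsbm(cur, ref, y + oy, x + ox, half, thresh, min_size)
    return leaves
```

Calling quadtree_vsbm(cur, ref, 0, 0, 64, thresh=3.0) on a pair of greyscale frames returns the (y, x, size, motion vector, SAD) leaves covering that 64x64 region.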


IEEE Transactions on Circuits and Systems for Video Technology | 2008

Fast Inter-Mode Selection in the H.264/AVC Standard Using a Hierarchical Decision Process

Andy C. Yu; Graham R. Martin; Heechan Park

A complexity reduction algorithm tailored for the H.264/AVC encoder is described. It aims to alleviate the computational burden imposed by Lagrangian rate distortion optimization in the inter-mode selection process. The proposed algorithm is described as a hierarchical structure comprising three levels. Each level targets different types of macroblocks according to the complexity of the search process. Early termination of mode selection is triggered at any of the levels to avoid a full cycle of Lagrangian examination. The algorithm is evaluated using a wide range of test sequences of different classes. The results demonstrate a reduction in encoding time of at least 40%, regardless of the class of sequence. Despite the reduction in computational complexity, picture quality is maintained at all bit rates.
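
The abstract does not give the individual level tests, so the following is only a rough Python sketch of the general shape of such a hierarchy: inexpensive checks dispatch easy macroblocks early, and only the surviving candidate modes pay the Lagrangian evaluation. The macroblock statistics, thresholds and mode names are assumptions for illustration.

```python
def decide_inter_mode(mb_stats, rd_cost, thresholds):
    """Hierarchical inter-mode decision with early termination.

    mb_stats   -- cheap, pre-computed macroblock statistics (dict).
    rd_cost    -- callable(mode) returning the Lagrangian RD cost of a mode.
    thresholds -- early-termination thresholds (illustrative values).
    """
    # Level 1: near-stationary macroblocks take SKIP with no further search.
    if mb_stats['colocated_sad'] < thresholds['skip']:
        return 'SKIP'

    # Level 2: homogeneous macroblocks are restricted to large partitions.
    if mb_stats['texture_variance'] < thresholds['homogeneous']:
        candidates = ['16x16', '16x8', '8x16']
    else:
        # Level 3: complex macroblocks keep the full candidate set.
        candidates = ['16x16', '16x8', '8x16', '8x8', '8x4', '4x8', '4x4']

    # Only the surviving candidates pay the Lagrangian RD evaluation.
    return min(candidates, key=rd_cost)
```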


international conference on image processing | 2004

Advanced block size selection algorithm for inter frame coding in H.264/MPEG-4 AVC

Andy C. Yu; Graham R. Martin

A fast inter-mode selection algorithm is proposed to improve the encoder efficiency of the H.264/MPEG-4 AVC standard with negligible degradation in picture quality. The modified fast inter-mode selection (MFInterms) algorithm extends previous work to provide a more efficient prediction of the mode decision. The strategy incorporates temporal similarity detection and the detection of different moving features within a macroblock. Simulation results demonstrate a speed-up in encoding time of up to 73% compared with the H.264 benchmark.
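
As a hedged illustration of the temporal similarity idea mentioned above (not the MFInterms algorithm itself), the sketch below reuses the previous frame's mode decision when a macroblock has barely changed; the MAD measure and threshold are assumptions.

```python
import numpy as np

def temporally_similar(cur_mb, prev_mb, thresh=2.0):
    # Two co-located macroblocks are treated as similar when their mean
    # absolute difference is small; the threshold is an assumption.
    mad = np.mean(np.abs(cur_mb.astype(np.int32) - prev_mb.astype(np.int32)))
    return mad < thresh

def predict_mode(cur_mb, prev_mb, prev_mode, full_mode_search):
    # If the macroblock has barely changed since the previous frame,
    # reuse its mode instead of re-running the full Lagrangian search.
    if temporally_similar(cur_mb, prev_mb):
        return prev_mode
    return full_mode_search(cur_mb)
```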


Journal of Visual Communication and Image Representation | 2006

Efficient intra- and inter-mode selection algorithms for H.264/AVC

Andy C. Yu; Ngan King Ngi; Graham R. Martin

The H.264/AVC standard is one of the most popular formats for next-generation video coding. It provides better compression performance and visual quality than existing video coding standards. Intra-frame mode selection and inter-frame mode selection are new features introduced in the H.264/AVC standard. Intra-frame mode selection dramatically reduces spatial redundancy in I-frames, while inter-frame mode selection significantly affects the output quality of P-/B-frames by selecting an optimal block size with motion vector(s), or a mode, for each macroblock. Unfortunately, this feature requires a considerable amount of encoding time, especially when a brute-force full-search method is utilised. In this paper, we propose fast mode-selection algorithms tailored to both intra-frames and inter-frames. The proposed fast intra-frame mode algorithm reduces the computational complexity of the Lagrangian rate-distortion optimisation evaluation. Two proposed fast inter-frame mode algorithms incorporate several robust and reliable predictive factors, including the intrinsic complexity of the macroblock, mode knowledge from the previous frame(s), temporal similarity detection and the detection of different moving features within a macroblock, to effectively reduce the number of search operations. Complete and extensive simulations demonstrate the performance of each. Finally, we combine these contributions to form two novel fast mode algorithms for H.264/AVC video coding. Simulations on different classes of test sequences demonstrate a speed-up in encoding time of up to 86% compared with the H.264/AVC benchmark. This is achieved without any significant degradation in picture quality or compression ratio.
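
A minimal sketch of one common way to reduce the cost of the Lagrangian evaluation for intra modes, shortlisting candidates with a cheap distortion measure before the full rate-distortion check; the SAD measure and the shortlist size are assumptions, not the algorithm proposed in the paper.

```python
import numpy as np

def fast_intra_mode_selection(block, predictions, rd_cost, keep=3):
    """Shortlist intra modes with a cheap cost before the full RD check.

    predictions -- dict mapping mode name -> predicted block (same shape).
    rd_cost     -- callable(mode) returning the full Lagrangian RD cost.
    keep        -- number of cheap-cost survivors (illustrative value).
    """
    # Cheap cost: sum of absolute differences against each prediction.
    cheap = {m: int(np.abs(block.astype(np.int32) - p.astype(np.int32)).sum())
             for m, p in predictions.items()}
    # Only the best few candidates pay the expensive RD evaluation.
    shortlist = sorted(cheap, key=cheap.get)[:keep]
    return min(shortlist, key=rd_cost)
```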


IEEE Transactions on Circuits and Systems for Video Technology | 2013

Fast Mode Decision Algorithm for the H.264/AVC Scalable Video Coding Extension

Xin Lu; Graham R. Martin

A fast mode decision algorithm for efficient implementation of the scalable video coding (SVC) extension of H.264/AVC is described. SVC incorporates interlayer prediction, a new tool that exploits as much lower layer information as possible in order to improve the coding efficiency of the enhancement layer. However, it also greatly increases the computational complexity. A fast mode selection algorithm is proposed that exploits the correlation between a macroblock in the enhancement layer and both the co-located macroblock in the base layer and neighboring macroblocks in the enhancement layer. The algorithm examines the level of picture detail and motion activity, and utilizes the mode information of the base layer to make faster enhancement layer decisions and thus save coding time. Simulation results show that the proposed algorithm reduces encoding time by up to 84% compared with the JSVM 9.18 implementation. This is achieved without any noticeable degradation in rate-distortion performance.
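
A small sketch of how base-layer mode information might narrow the enhancement-layer candidate set, under assumed mappings; the mode names, categories and rules are illustrative and do not reproduce the paper's algorithm or the JSVM reference software.

```python
def enhancement_layer_candidates(base_mode, detail, motion):
    """Narrow the enhancement-layer mode search using the co-located
    base-layer mode plus coarse measures of picture detail and motion
    activity. The mapping is illustrative, not the paper's rules."""
    if base_mode == 'SKIP' and motion == 'low':
        candidates = ['SKIP', 'BL_SKIP']              # inherit the base-layer decision
    elif base_mode in ('16x16', '16x8', '8x16') and detail == 'low':
        candidates = ['BL_SKIP', base_mode, '16x16']  # large partitions only
    else:
        candidates = ['SKIP', 'BL_SKIP', '16x16', '16x8',
                      '8x16', '8x8', 'INTRA']         # full evaluation
    return list(dict.fromkeys(candidates))            # drop duplicates
```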


Lecture Notes in Computer Science | 2005

Progressive mesh-based motion estimation using partial refinement

Heechan Park; Andy C. Yu; Graham R. Martin

A technique for performing progressive mesh-based motion estimation in a layered fashion is presented. Motion compensation based on image warping provides a prediction free of blocking artifacts. This smooth prediction can be used to identify motion-active regions, by comparison with the reference frame, and to generate a denser partial mesh, thus forming layers of mesh. The approach provides hierarchical partial refinement according to motion activity without additional cost. Experimental results indicate that the technique shows improvement over a single-layered uniform mesh and advantages over block-based techniques, particularly in scalable and very low bit-rate video coding.
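
A minimal sketch of the partial refinement step as described: blocks where the warped prediction still differs noticeably from the reference frame are treated as motion-active, and only there is a denser layer of mesh nodes added. The block size, error measure and node placement are illustrative assumptions.

```python
import numpy as np

def active_regions(prediction, reference, block=16, thresh=6.0):
    # Mark blocks where the warped prediction still differs noticeably
    # from the reference frame; these are the motion-active regions.
    h, w = reference.shape
    active = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            diff = np.abs(prediction[y:y + block, x:x + block].astype(np.int32)
                          - reference[y:y + block, x:x + block].astype(np.int32))
            if diff.mean() > thresh:
                active.append((y, x))
    return active

def refine_mesh(base_nodes, prediction, reference, block=16):
    # Add a denser layer of mesh nodes only inside the active regions,
    # giving a hierarchical partial refinement driven by motion activity.
    extra = set()
    for y, x in active_regions(prediction, reference, block):
        half = block // 2
        extra.update({(y, x), (y, x + half), (y + half, x), (y + half, x + half)})
    return sorted(set(base_nodes) | extra)
```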


Signal Processing-image Communication | 2008

Compact representation of contours using directional grid chain code

Heechan Park; Graham R. Martin; Andy C. Yu

An efficient contour-based method for the coding of binary shape information is described. Conventional chain coding techniques show high coding efficiency for lossless compression, but they exploit the coherence of the contour in only a restricted manner. Higher coding efficiency can be achieved by modelling the neighbourhood relation as a Markov chain, and this is exploited in a new coding scheme, directional grid chain coding (DGCC). The method is computationally efficient and the coding process adapts to the inherent changes in the contour. Two schemes are proposed, a lossless and a quasi-lossless method. The lossless scheme achieves a 32% saving in bit rate compared with the conventional differential chain code (DCC). The second, quasi-lossless technique achieves a 44% bit reduction compared with the DCC, and the distortions present in the reconstructed contour are hardly noticeable to the human eye.
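
For context on the baseline used in the comparison, here is a minimal sketch of a conventional differential chain code (DCC) for an 8-connected contour; the DGCC itself, which exploits a Markov model of neighbouring directions, is not reproduced.

```python
# 8-connected step directions, index 0 = east, counted anticlockwise.
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(contour):
    # Absolute chain code: the direction index of each step along the
    # contour (consecutive points must be 8-connected neighbours).
    return [DIRS.index((y1 - y0, x1 - x0))
            for (y0, x0), (y1, x1) in zip(contour, contour[1:])]

def differential_chain_code(contour):
    # DCC: encode the change of direction modulo 8, which is small for
    # smooth contours and therefore cheap to entropy code.
    code = chain_code(contour)
    return code[:1] + [(c - p) % 8 for p, c in zip(code, code[1:])]
```

For example, differential_chain_code([(0, 0), (0, 1), (0, 2), (1, 3)]) yields [0, 0, 7].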


real-time systems symposium | 1995

A scalable real-time synchronization protocol for distributed systems

Injong Rhee; Graham R. Martin

A distributed protocol is proposed for the synchronization of real-time tasks that have variable resource requirements. The protocol is simple to implement and is intended for large-scale distributed or parallel systems in which processes communicate by message passing. Critical sections, even when nested, may be executed on any processor. Thus, given an adequate number of processors, the execution of critical sections can be completely distributed. More significantly, since the protocol enables the distributed allocation of critical sections, the benefits of various allocations can be analyzed and the system optimized to provide minimal blocking. This has important application in global optimization techniques for allocating large numbers of hard real-time tasks in multiprocessor systems.
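
The sketch below is not the paper's protocol; it only illustrates the basic idea of executing critical sections through message passing to a per-section manager that can be placed on any processor, with Python threads and queues standing in for processors and messages.

```python
import threading, queue

class SectionManager(threading.Thread):
    """One manager per critical section, hosted on an arbitrary processor.
    Requests arrive as messages and are granted strictly in FIFO order."""
    def __init__(self):
        super().__init__(daemon=True)
        self.requests = queue.Queue()        # mailbox for REQUEST messages
    def run(self):
        while True:
            reply_box = self.requests.get()  # next waiting task
            done = threading.Event()
            reply_box.put(done)              # GRANT message
            done.wait()                      # block until RELEASE

def critical_section(manager, work):
    reply_box = queue.Queue()
    manager.requests.put(reply_box)          # send REQUEST to the manager
    done = reply_box.get()                   # wait for GRANT
    try:
        work()                               # execute the critical section
    finally:
        done.set()                           # send RELEASE

# Example: m = SectionManager(); m.start()
#          critical_section(m, lambda: print("inside"))
```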


Proceedings. Eighth International Conference on Information Visualisation, 2004. IV 2004. | 2004

Automatic selection of attributes by importance in relevance feedback visualisation

Chee Un Ng; Graham R. Martin

Relevance feedback visualisation (RFV) is a technique developed to visualise the feature values of returned results in a content-based image retrieval system that incorporates relevance feedback. RFV is also used to re-sort retrieved results according to user requirements, enable the interactive investigation of pertinent features, and permit the discovery of otherwise unidentifiable trends in the dataset. When large numbers of features are involved, manually determining which feature attribute graphs are the most important can be a burdensome task. In this paper, a method for automatically sorting attribute graphs according to their significance in the search operation is introduced. The result is that features worthy of further investigation are immediately identified, the user interface is improved, and the CBIR system is made more effective.
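
A small sketch of one plausible importance measure for sorting attribute graphs, a Fisher-like ratio of between-class to within-class spread computed from the images the user marked relevant; the scoring function is an assumption, not the measure used in the paper.

```python
import numpy as np

def attribute_importance(relevant, non_relevant):
    # Fisher-like ratio per feature: squared difference of class means
    # over the pooled within-class variance (larger = more discriminative).
    r = np.asarray(relevant, dtype=float)
    n = np.asarray(non_relevant, dtype=float)
    between = (r.mean(axis=0) - n.mean(axis=0)) ** 2
    within = r.var(axis=0) + n.var(axis=0) + 1e-12
    return between / within

def sort_attribute_graphs(names, relevant, non_relevant):
    # Present the attribute graphs most worth inspecting first.
    scores = attribute_importance(relevant, non_relevant)
    order = np.argsort(scores)[::-1]
    return [names[i] for i in order]
```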


data compression conference | 2017

Fast Intra Coding Implementation for High Efficiency Video Coding (HEVC)

Xin Lu; Nan Xiao; Graham R. Martin; Yue Hu; Xuesong Jin

In High Efficiency Video Coding (HEVC), a quad-tree based Coding Unit (CU) partitioning scheme is employed, achieving a substantial improvement in coding efficiency compared with previous standards. The superior coding efficiency of HEVC is achieved at the expense of greatly increased complexity. A fast intra coding scheme consisting of a fast CU depth decision and a fast prediction mode decision is proposed to reduce the computational requirement. Classification of the homogeneity of video content using an adaptive double-threshold scheme is employed to reduce the number of Rate Distortion (RD) evaluations. The partition information of spatially neighbouring CUs is utilised to further narrow the depth range. The construction of the initial candidate list is improved for each Prediction Unit (PU). Then the prediction mode correlation between neighbouring quad-tree coding levels is considered to predict the most likely coding mode. The Hadamard cost of prediction modes is examined to further reduce the candidate modes. The computational complexity of HEVC intra coding is therefore reduced. Simulation results show that the proposed algorithm reduces encoding time by up to 71% compared with the HM 13.0 implementation, while having a negligible impact on rate-distortion performance, with a bit-rate increase of 1.82%.
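
A minimal sketch of the double-threshold homogeneity classification for the CU depth decision, under assumed thresholds and an assumed variance-based activity measure; the actual adaptive thresholds are derived in the paper and are not reproduced here.

```python
import numpy as np

def cu_split_decision(cu, t_low, t_high):
    """Double-threshold homogeneity test for the CU depth decision.

    Returns 'no_split' for homogeneous CUs, 'split' for highly detailed
    ones, and 'evaluate_both' when the RD check cannot be skipped.
    """
    activity = float(np.var(cu.astype(np.float64)))  # assumed activity measure
    if activity < t_low:
        return 'no_split'        # smooth content: the current depth suffices
    if activity > t_high:
        return 'split'           # detailed content: go straight to sub-CUs
    return 'evaluate_both'       # ambiguous: fall back to the RD evaluation
```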

Collaboration


Dive into Graham R. Martin's collaborations.

Top Co-Authors

Xin Lu, University of Warwick

Nan Xiao, Harbin Institute of Technology

Xuesong Jin, Harbin Institute of Technology

Yue Hu, Harbin Institute of Technology

Zhilu Wu, Harbin Institute of Technology