
Publications

Featured research published by Nantheera Anantrasirichai.


International Conference on Image Processing | 2014

Robust texture features for blurred images using Undecimated Dual-Tree Complex Wavelets

Nantheera Anantrasirichai; Jeremy F. Burn; David R. Bull

This paper presents a new descriptor for texture classification. The descriptor is rotationally invariant and blur insensitive, which provides great benefits for applications that suffer from out-of-focus content or involve fast-moving or shaking cameras. We employ an Undecimated Dual-Tree Complex Wavelet Transform (UDT-CWT) [1] to extract texture features. As the UDT-CWT fully preserves the local spatial relationships between scales and subband orientations, we can straightforwardly create bit-planes of the images representing the local phases of wavelet coefficients. We also discard some of the finest decomposition levels, which are most affected by the blur. A histogram of the resulting code words is created and used as the feature vector in texture classification. Experimental results show that our approach outperforms existing methods by up to 40% for synthetic blurs and up to 30% for natural video content with blur due to camera motion when walking.
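
The bit-plane coding step described above can be sketched as follows. This is only an illustration of the general idea: a small bank of complex Gabor-like filters stands in for the UDT-CWT, and all function names, the filter design, and the threshold are assumptions rather than the authors' implementation.

```python
import numpy as np

def _conv2_same(img, kern):
    """Naive same-size 2-D convolution via FFT."""
    H, W = img.shape
    kh, kw = kern.shape
    fimg = np.fft.fft2(img, s=(H + kh - 1, W + kw - 1))
    fk = np.fft.fft2(kern, s=(H + kh - 1, W + kw - 1))
    full = np.fft.ifft2(fimg * fk)
    return full[kh // 2: kh // 2 + H, kw // 2: kw // 2 + W]

def phase_bitplane_histogram(image, n_orientations=4, threshold=0.0):
    """Encode the local-phase signs of complex filter responses into
    per-pixel code words and histogram them (an LBP-like descriptor)."""
    h, w = image.shape
    ys, xs = np.mgrid[-3:4, -3:4]
    bits = []
    for k in range(n_orientations):
        theta = np.pi * k / n_orientations
        u = xs * np.cos(theta) + ys * np.sin(theta)
        kernel = np.exp(-(xs**2 + ys**2) / 8.0) * np.exp(1j * u)
        resp = _conv2_same(image.astype(float), kernel)
        # two bit-planes per orientation: signs of real and imaginary parts
        bits.append((resp.real > threshold).astype(np.uint8))
        bits.append((resp.imag > threshold).astype(np.uint8))
    # pack the bit-planes into one integer code word per pixel
    codes = np.zeros((h, w), dtype=np.int64)
    for i, b in enumerate(bits):
        codes |= b.astype(np.int64) << i
    hist = np.bincount(codes.ravel(), minlength=2 ** len(bits))
    return hist / hist.sum()  # normalised histogram = texture feature
```

With four orientations this yields 8 bit-planes, i.e. a 256-bin histogram per image; classification would then compare these histograms.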


International Symposium on Biomedical Imaging | 2013

SVM-based texture classification in Optical Coherence Tomography

Nantheera Anantrasirichai; Alin Achim; James Edwards Morgan; Irina Erchova; Lindsay B. Nicholson

This paper describes a new method for automated texture classification for glaucoma detection using high resolution retinal Optical Coherence Tomography (OCT). OCT is a non-invasive technique that produces cross-sectional imagery of ocular tissue. Here, we exploit information from OCT images, specifically the inner retinal layer thickness and speckle patterns, to detect glaucoma. The proposed method relies on support vector machines (SVM), while principal component analysis (PCA) is also employed to improve classification performance. Results show that texture features can improve classification accuracy over what is achieved using layer thickness alone, as existing methods do.
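
The PCA-then-SVM pipeline can be sketched minimally as below. This is a generic, numpy-only illustration on arbitrary feature vectors, not the paper's classifier: the SVM here is a bare-bones linear hinge-loss trainer, and all names and hyperparameters are assumptions.

```python
import numpy as np

def pca_project(X, n_components):
    """Project rows of X onto the top principal components (via SVD)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, mu, Vt[:n_components]

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimal linear SVM trained by sub-gradient descent on the
    regularised hinge loss; labels y must be in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                      # margin-violating samples
        grad_w = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / n
        grad_b = -y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

In practice a library SVM (e.g. with a non-linear kernel) would replace the toy trainer; the point is only the order of operations: reduce the texture/thickness features with PCA first, then classify.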


IEEE Transactions on Image Processing | 2013

Atmospheric Turbulence Mitigation Using Complex Wavelet-Based Fusion

Nantheera Anantrasirichai; Alin Achim; Nick G. Kingsbury; David R. Bull

Restoring a scene distorted by atmospheric turbulence is a challenging problem in video surveillance. The effect, caused by random, spatially varying perturbations, makes a model-based solution difficult and, in most cases, impractical. In this paper, we propose a novel method for mitigating the effects of atmospheric distortion on observed images, particularly airborne turbulence which can severely degrade a region of interest (ROI). In order to extract accurate detail about objects behind the distorting layer, a simple and efficient frame selection method is proposed to select informative ROIs only from good-quality frames. The ROIs in each frame are then registered to further reduce offsets and distortions. We solve the space-varying distortion problem using region-level fusion based on the dual tree complex wavelet transform. Finally, contrast enhancement is applied. We further propose a learning-based metric specifically for image quality assessment in the presence of atmospheric distortion. This is capable of estimating quality in both full- and no-reference scenarios. The proposed method is shown to significantly outperform existing methods, providing enhanced situational awareness in a range of surveillance scenarios.
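
The frame-selection step can be illustrated with a simple quality ranking. Note the hedge: the paper's quality metric is learning-based, whereas the sharpness proxy below (mean gradient-magnitude energy) and the function names are assumptions chosen only to show the select-the-best-frames mechanic.

```python
import numpy as np

def select_sharpest_frames(frames, k):
    """Rank ROI frames by a simple sharpness proxy (mean
    gradient-magnitude energy) and keep the k best for fusion."""
    def sharpness(f):
        gy, gx = np.gradient(f.astype(float))
        return float(np.mean(gx**2 + gy**2))
    scores = [sharpness(f) for f in frames]
    order = np.argsort(scores)[::-1]          # descending sharpness
    best = [frames[i] for i in order[:k]]
    return best, [scores[i] for i in order[:k]]
```

The selected frames would then be registered and fused region-by-region in the wavelet domain.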


Computerized Medical Imaging and Graphics | 2014

Adaptive-weighted bilateral filtering and other pre-processing techniques for optical coherence tomography

Nantheera Anantrasirichai; Lindsay B. Nicholson; James Edwards Morgan; Irina Erchova; Katharine Eirlys Mortlock; R. V. North; Julie Albon; Alin Achim

This paper presents novel pre-processing image enhancement algorithms for retinal optical coherence tomography (OCT). These images contain a large amount of speckle causing them to be grainy and of very low contrast. To make these images valuable for clinical interpretation, we propose a novel method to remove speckle, while preserving useful information contained in each retinal layer. The process starts with multi-scale despeckling based on a dual-tree complex wavelet transform (DT-CWT). We further enhance the OCT image through a smoothing process that uses a novel adaptive-weighted bilateral filter (AWBF). This offers the desirable property of preserving texture within the OCT image layers. The enhanced OCT image is then segmented to extract inner retinal layers that contain useful information for eye research. Our layer segmentation technique is also performed in the DT-CWT domain. Finally we describe an OCT/fundus image registration algorithm which is helpful when two modalities are used together for diagnosis and for information fusion.
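
The smoothing stage builds on the bilateral filter, which can be sketched in plain numpy as below. This shows only the standard bilateral filter: the paper's AWBF additionally adapts the weights to local texture within each retinal layer, and that adaptation (like the parameter values here) is omitted as an assumption-free simplification.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Plain bilateral filter: each output pixel is a weighted mean of
    its neighbours, with weights combining spatial proximity and
    intensity similarity, so edges between layers are preserved."""
    img = img.astype(float)
    pad = np.pad(img, radius, mode='reflect')
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy: radius + dy + img.shape[0],
                          radius + dx: radius + dx + img.shape[1]]
            # spatial Gaussian x range (intensity) Gaussian
            w = (np.exp(-(dx*dx + dy*dy) / (2 * sigma_s**2))
                 * np.exp(-(shifted - img)**2 / (2 * sigma_r**2)))
            out += w * shifted
            norm += w
    return out / norm
```

Flat regions are smoothed (reducing speckle-like variance) while strong intensity edges contribute small range weights and are left largely intact.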


Signal Processing: Image Communication | 2015

Undecimated Dual-Tree Complex Wavelet Transforms

Paul R. Hill; Nantheera Anantrasirichai; Alin Achim; Mohammed E. Al-Mualla; David R. Bull

Two undecimated forms of the Dual Tree Complex Wavelet Transform (DT-CWT) are introduced together with their application to image denoising and robust feature extraction. These undecimated transforms extend the DT-CWT by removing the downsampling of filter outputs and upsampling the complex filter pairs, in a structure similar to the Undecimated Discrete Wavelet Transform (UDWT). Both developed transforms offer exact translational invariance and improved scale-to-scale coefficient correlation, together with the directional selectivity of the DT-CWT. Additionally, within each developed transform the subbands are of a consistent size, giving a direct one-to-one relationship between co-located coefficients at all scales and therefore consistent phase relationships across scales. These advantages can be exploited within applications such as denoising, image fusion, segmentation and robust feature extraction. The results of two example applications (bivariate shrinkage denoising and robust feature extraction) demonstrate objective and subjective improvements over the DT-CWT. The two novel transforms together with the DT-CWT offer a trade-off between denoising performance, computational efficiency and memory requirements.

Highlights:
- Proposed transforms have exact translational invariance.
- Coefficients have one-to-one cross-scale relationships.
- Improved results for two example applications.
- Matlab code available at: www.bristol.ac.uk/vi-lab/projects/udtcwt.
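
The undecimated mechanics, removing downsampling and instead upsampling the filters ("algorithme à trous"), can be sketched in 1-D. This is deliberately simplified: the real DT-CWT uses two trees of complex filter pairs, while the sketch below uses a single real low-pass filter purely to show why every scale keeps the input's size.

```python
import numpy as np

def atrous_upsample(filt, level):
    """Insert 2**level - 1 zeros between filter taps; this replaces
    downsampling of the signal and keeps all subbands full-size."""
    step = 2 ** level
    up = np.zeros((len(filt) - 1) * step + 1)
    up[::step] = filt
    return up

def undecimated_lowpass(signal, filt, levels):
    """One undecimated filter-bank branch: repeatedly convolve with the
    a-trous-upsampled filter, never subsampling, so coefficients at all
    scales are co-located one-to-one with the input samples."""
    approx = [np.asarray(signal, dtype=float)]
    for j in range(levels):
        h = atrous_upsample(filt, j)
        approx.append(np.convolve(approx[-1], h, mode='same'))
    return approx
```

The equal-length subbands are exactly what gives the direct cross-scale coefficient correspondence described in the abstract.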


International Conference on Acoustics, Speech, and Signal Processing | 2006

Dynamic Programming for Multi-View Disparity/Depth Estimation

Nantheera Anantrasirichai; Cedric Nishan Canagarajah; David W. Redmill; David R. Bull

A novel algorithm for disparity/depth estimation from multi-view images is presented. A dynamic programming approach with window-based correlation and a novel cost function is proposed. The smoothness of the disparity/depth map is embedded in the dynamic programming approach, whilst the window-based correlation increases reliability. Enhancement methods are also included: an adaptive window size and shiftable windows are used to increase reliability in homogeneous areas and to increase sharpness at object boundaries. The algorithm first estimates depth maps along a single camera axis, then combines the depth estimates from different axes to derive a suitable depth map for multi-view images. The proposed scheme outperforms existing approaches in both parallel and non-parallel camera configurations.
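
The core dynamic-programming idea can be shown for a single scanline pair. This sketch uses a per-pixel absolute-difference cost with a linear smoothness penalty; the paper's window-based correlation, adaptive/shiftable windows, and cost function are not reproduced here, so treat the cost choices below as illustrative assumptions.

```python
import numpy as np

def scanline_disparity(left, right, max_disp, smooth=0.1):
    """Dynamic-programming disparity for one scanline pair.
    Matching cost: |left[x] - right[x - d]|; smoothness penalty:
    smooth * |d - d'| between neighbouring pixels."""
    left = np.asarray(left, float)
    right = np.asarray(right, float)
    n, D = len(left), max_disp + 1
    # data term (invalid matches, x - d < 0, get a large cost)
    data = np.full((n, D), 1e3)
    for x in range(n):
        for d in range(min(D, x + 1)):
            data[x, d] = abs(left[x] - right[x - d])
    # forward pass: accumulate minimal cost with the smoothness penalty
    acc = data.copy()
    back = np.zeros((n, D), dtype=int)
    for x in range(1, n):
        for d in range(D):
            prev = acc[x - 1] + smooth * np.abs(np.arange(D) - d)
            back[x, d] = int(np.argmin(prev))
            acc[x, d] = data[x, d] + prev[back[x, d]]
    # backtrack the minimum-cost disparity path
    disp = np.zeros(n, dtype=int)
    disp[-1] = int(np.argmin(acc[-1]))
    for x in range(n - 1, 0, -1):
        disp[x - 1] = back[x, disp[x]]
    return disp
```

Embedding the smoothness penalty inside the DP recursion is what yields smooth maps without a separate post-processing step.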


International Conference on Acoustics, Speech, and Signal Processing | 2009

Enhanced spatially interleaved DVC using diversity and selective feedback

Nantheera Anantrasirichai; Dimitris Agrafiotis; David R. Bull

Systems with cheap, simple, power-efficient encoders but complex decoders make applications such as low-cost, low-power remote sensors practical. Bandwidth, however, is still an issue, and compression efficiency has to remain high. In this paper, we present a distributed video codec (DVC) that we are developing with the aim of achieving such a low-power paradigm at the cost of only a small compression performance deficit relative to the current state of the art, H.264. The proposed system employs spatial interleaving of KEY and Wyner-Ziv data, which allows efficient side information (SI) generation through block-based error concealment, a Gray code that increases the accuracy of bit probability estimation, and a diversity scheme that produces more reliable results by exploiting multiple SI generated data. Simulation results show an improvement of the proposed scheme over H.264 intra coding of up to 1.5 dB. We additionally propose two mechanisms for selective parity bit feedback requests that can further reduce the WZ bitrate by up to 15%.
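
Two of the ingredients above are easy to sketch: a checkerboard interleaving of KEY and Wyner-Ziv macroblocks, and the binary-reflected Gray code. The checkerboard layout is an assumption about the interleaving pattern made only for illustration; the Gray mapping itself is standard.

```python
import numpy as np

def interleave_mask(rows, cols):
    """Checkerboard assignment of macroblocks to KEY (True) or
    Wyner-Ziv (False): every WZ block then has KEY neighbours,
    which is what makes block-based concealment effective for SI."""
    r, c = np.indices((rows, cols))
    return (r + c) % 2 == 0

def to_gray(value):
    """Binary-reflected Gray code: successive values differ in exactly
    one bit, improving per-bit probability estimation at the decoder."""
    return value ^ (value >> 1)
```

For example, pixel values 2 and 3 differ in two bits in natural binary (10 vs 11 is one bit, but 1 and 2 differ in two); under Gray coding any two consecutive values differ in a single bit.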


International Conference on Image Processing | 2005

Multi-view image coding with wavelet lifting and in-band disparity compensation

Nantheera Anantrasirichai; Cedric Nishan Canagarajah; David R. Bull

This paper presents a novel framework for scalable multi-view image coding. As an open-loop operation, the wavelet lifting scheme for geometric filtering is exploited to overcome the limitation of SNR scalability and to attain view scalability. The key to achieving spatial scalability is in-band prediction, which removes correlations among subbands level by level via shift-invariant references obtained from the overcomplete discrete wavelet transform (ODWT). Additionally, the proposed disparity-compensated view filtering allows different filters and estimation parameters to be used at each resolution level. The experiments show comparable results at full resolution and a significant improvement at coarser resolutions over the conventional spatial prediction scheme.
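
The lifting scheme underlying the open-loop operation can be shown in its simplest (Haar) 1-D form. To be clear about scope: the paper applies lifting across views with disparity compensation, whereas the sketch below only demonstrates the predict/update mechanics and the perfect invertibility that makes open-loop scalable coding possible.

```python
import numpy as np

def haar_lift_forward(x):
    """One level of Haar lifting (x must have even length): split into
    even/odd samples, predict odds from evens, then update evens."""
    even = x[::2].astype(float)
    odd = x[1::2].astype(float)
    detail = odd - even             # predict step
    approx = even + detail / 2      # update step (preserves the mean)
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Undo the update then the predict step; reconstruction is exact."""
    even = approx - detail / 2
    odd = detail + even
    x = np.empty(2 * len(approx))
    x[::2], x[1::2] = even, odd
    return x
```

Because each lifting step is trivially invertible, no closed prediction loop is needed, and the bitstream can be truncated for SNR/view scalability without drift.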


International Conference on Image Processing | 2008

A concealment based approach to distributed video coding

Nantheera Anantrasirichai; Dimitris Agrafiotis; David R. Bull

This paper presents a concealment-based approach to distributed video coding that uses hybrid key/WZ frames via an FMO-type interleaving of macroblocks. Our motivation stems from previous work of ours that showed promising results relative to the more common approach of splitting the sequence into key and WZ frames. In this paper, we extend our previous scheme to the case of I-B-P frame structures and transform-domain DVC. We additionally introduce a number of enhancements at the decoder, including the use of spatio-temporal concealment for generating the side information on an MB basis, mode selection for switching between the two concealment approaches and for deciding how the correlation noise is estimated, local (MB-wise) correlation noise estimation, and modified B-frame quantisation. The results presented indicate considerable improvement (up to 30%) compared to corresponding frame extrapolation and frame interpolation schemes.
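
The spatio-temporal mode selection can be illustrated with a toy per-block concealment. The candidates and the boundary-matching criterion below are assumptions chosen to show the switching idea, not the paper's actual concealment or mode-decision rules.

```python
import numpy as np

def conceal_block(frame, prev_frame, top, left, size):
    """Toy spatio-temporal concealment for one missing block: build a
    temporal candidate (co-located block in the previous frame) and a
    spatial candidate (mean of the available surrounding border), then
    keep whichever matches the border pixels better."""
    y, x = top, left
    temporal = prev_frame[y:y + size, x:x + size]
    border = np.concatenate([frame[y - 1, x:x + size],
                             frame[y + size, x:x + size],
                             frame[y:y + size, x - 1],
                             frame[y:y + size, x + size]])
    spatial = np.full((size, size), border.mean())
    def border_err(cand):
        # sum of absolute differences along all four block edges
        return (abs(cand[0] - frame[y - 1, x:x + size]).sum()
                + abs(cand[-1] - frame[y + size, x:x + size]).sum()
                + abs(cand[:, 0] - frame[y:y + size, x - 1]).sum()
                + abs(cand[:, -1] - frame[y:y + size, x + size]).sum())
    return temporal if border_err(temporal) <= border_err(spatial) else spatial
```

In the codec this per-block choice feeds the side-information generation; a real decoder would use richer spatial interpolation and motion-compensated temporal candidates.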


International Conference on Image Processing | 2016

Fixation identification for low-sample-rate mobile eye trackers

Nantheera Anantrasirichai; Iain D. Gilchrist; David R. Bull

This paper presents a novel method of fixation identification for mobile eye trackers. The most significant benefit of our method over the state-of-the-art is that it achieves high accuracy for low-sample-rate devices worn during locomotion. This in turn delivers higher-quality datasets for further use in human behaviour research, robotics and the development of guidance aids for the visually impaired. The proposed method combines temporal characteristics of the eye positions with statistical visual features, extracted from the foveal and peripheral areas around the eye positions using a deep convolutional neural network inspired by models of the primate visual system. The results show that the proposed method outperforms existing methods by up to 16% in terms of classification accuracy.
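
For context, the classical baseline that such methods improve upon is dispersion-threshold identification (I-DT). The sketch below implements that baseline only; the paper's method replaces this purely positional rule with temporal characteristics plus CNN-derived visual features, and the thresholds here are illustrative assumptions.

```python
import numpy as np

def identify_fixations(gaze, max_dispersion, min_samples):
    """Dispersion-threshold (I-DT) fixation identification: grow a
    window while the gaze dispersion (x-range + y-range) stays under
    a threshold; windows with enough samples become fixations.
    Returns (start, end) index pairs with end exclusive."""
    gaze = np.asarray(gaze, float)
    fixations, i, n = [], 0, len(gaze)
    while i < n:
        j = i + 1
        while j <= n:
            win = gaze[i:j]
            disp = ((win[:, 0].max() - win[:, 0].min())
                    + (win[:, 1].max() - win[:, 1].min()))
            if disp > max_dispersion:
                break
            j += 1
        j -= 1  # last window size that satisfied the threshold
        if j - i >= min_samples:
            fixations.append((i, j))
            i = j
        else:
            i += 1
    return fixations
```

On low-sample-rate mobile trackers, few samples per fixation and locomotion-induced gaze noise are exactly what make this positional baseline fragile, motivating the visual-feature approach of the paper.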
