Publication


Featured research published by Zehan Wang.


Computer Vision and Pattern Recognition | 2017

Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network

Christian Ledig; Lucas Theis; Ferenc Huszár; Jose Caballero; Andrew Cunningham; Alejandro Acosta; Andrew P. Aitken; Alykhan Tejani; Johannes Totz; Zehan Wang; Wenzhe Shi

Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.
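The perceptual loss described above combines a content term computed on feature maps with an adversarial term. Below is a minimal PyTorch sketch of that combination, assuming a frozen torchvision VGG19 as the feature extractor; the layer slice and the adversarial weight are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal sketch of a content-plus-adversarial perceptual loss in the spirit of SRGAN.
# The VGG layer slice and the 1e-3 adversarial weight are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

class PerceptualLoss(nn.Module):
    def __init__(self, adv_weight=1e-3):
        super().__init__()
        # Frozen VGG19 feature extractor used for the content loss.
        self.features = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:36].eval()
        for p in self.features.parameters():
            p.requires_grad = False
        self.mse = nn.MSELoss()
        self.bce = nn.BCEWithLogitsLoss()
        self.adv_weight = adv_weight

    def forward(self, sr, hr, disc_logits_on_sr):
        # Content loss: distance between VGG feature maps of the SR and HR images.
        content = self.mse(self.features(sr), self.features(hr))
        # Adversarial loss: push the generator to make the discriminator say "real".
        adversarial = self.bce(disc_logits_on_sr, torch.ones_like(disc_logits_on_sr))
        return content + self.adv_weight * adversarial
```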


Computer Vision and Pattern Recognition | 2016

Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network

Wenzhe Shi; Jose Caballero; Ferenc Huszár; Johannes Totz; Andrew P. Aitken; Rob Bishop; Daniel Rueckert; Zehan Wang

Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.
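The sub-pixel convolution described above keeps all feature extraction in LR space and only rearranges channels onto the HR grid at the end. A minimal sketch of such a network is given below, assuming PyTorch's PixelShuffle as the periodic-shuffling step; the layer widths and kernel sizes are illustrative, not the paper's exact architecture.

```python
# Minimal sketch of an efficient sub-pixel convolution network in the spirit of ESPCN.
import torch
import torch.nn as nn

class SubPixelSR(nn.Module):
    def __init__(self, upscale=3, channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=5, padding=2), nn.Tanh(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.Tanh(),
            # Final conv emits r^2 feature maps per output channel, all in LR space.
            nn.Conv2d(32, channels * upscale ** 2, kernel_size=3, padding=1),
        )
        # Periodic shuffling rearranges the r^2 maps into an r-times larger image.
        self.shuffle = nn.PixelShuffle(upscale)

    def forward(self, lr):
        return self.shuffle(self.body(lr))

# Usage: a 1-channel 64x64 LR input becomes a 192x192 output for upscale=3.
y = SubPixelSR(upscale=3)(torch.randn(1, 1, 64, 64))
assert y.shape == (1, 1, 192, 192)
```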


Cerebral Cortex | 2014

Whole-Brain Mapping of Structural Connectivity in Infants Reveals Altered Connection Strength Associated with Growth and Preterm Birth

Anand Pandit; Emma C. Robinson; Paul Aljabar; Gareth Ball; Ioannis S. Gousias; Zehan Wang; Jo Hajnal; Daniel Rueckert; Serena J. Counsell; Giovanni Montana; Alexander D. Edwards

Cerebral white-matter injury is common in preterm-born infants and is associated with neurocognitive impairments. Identifying the pattern of connectivity changes in the brain following premature birth may provide a more comprehensive understanding of the neurobiology underlying these impairments. Here, we characterize whole-brain macrostructural connectivity following preterm delivery and explore the influence of age and prematurity using a data-driven, nonsubjective analysis of diffusion magnetic resonance imaging data. T1-weighted, T2-weighted, and diffusion MRI were obtained between 11 and 31 months postconceptional age in 49 infants, born between 25 and 35 weeks postconception. An optimized processing pipeline combining anatomical and tissue segmentations with probabilistic diffusion tractography was used to map mean tract anisotropy. White-matter tracts where connection strength was related to age at delivery or at imaging were identified using sparse-penalized regression and stability selection. Older children had stronger connections in tracts predominantly involving frontal lobe structures. Increasing prematurity at birth was related to widespread reductions in connection strength in tracts involving all cortical lobes and several subcortical structures. This nonsubjective approach to mapping whole-brain connectivity detected hypothesized changes in the strength of intracerebral connections during development and widespread reductions in connectivity strength associated with premature birth.
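The tract-selection step pairs a sparse-penalized regression with stability selection over random subsamples. The sketch below illustrates that general procedure with scikit-learn's Lasso on synthetic data; the feature matrix, penalty, and selection threshold are assumptions, not the study's actual values.

```python
# Illustrative sketch of sparse-penalized regression with stability selection;
# the synthetic data, alpha, and the 0.8 threshold are assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(49, 200))   # hypothetical: mean anisotropy of 200 tracts per infant
y = rng.normal(size=49)          # hypothetical: gestational age at birth

n_rounds, alpha = 200, 0.1
selected = np.zeros(X.shape[1])
for _ in range(n_rounds):
    # Refit the sparse model on a random half of the subjects each round.
    idx = rng.choice(len(y), size=len(y) // 2, replace=False)
    coef = Lasso(alpha=alpha, max_iter=10000).fit(X[idx], y[idx]).coef_
    selected += coef != 0

# Tracts chosen in a large fraction of subsamples are considered stably associated.
stable_tracts = np.where(selected / n_rounds > 0.8)[0]
print(stable_tracts)
```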


Medical Image Analysis | 2015

Discriminative dictionary learning for abdominal multi-organ segmentation

Tong Tong; Robin Wolz; Zehan Wang; Qinquan Gao; Kazunari Misawa; Michitaka Fujiwara; Kensaku Mori; Joseph V. Hajnal; Daniel Rueckert

An automated segmentation method is presented for multi-organ segmentation in abdominal CT images. Dictionary learning and sparse coding techniques are used in the proposed method to generate target-specific priors for segmentation. The method simultaneously learns dictionaries which have reconstructive power and classifiers which have discriminative ability from a set of selected atlases. Based on the learnt dictionaries and classifiers, probabilistic atlases are then generated to provide priors for the segmentation of unseen target images. The final segmentation is obtained by applying a post-processing step based on a graph-cuts method. In addition, this paper proposes a voxel-wise local atlas selection strategy to deal with high inter-subject variation in abdominal CT images. The segmentation performance of the proposed method with different atlas selection strategies is also compared. Our proposed method has been evaluated on a database of 150 abdominal CT images and achieves a promising segmentation performance with Dice overlap values of 94.9%, 93.6%, 71.1%, and 92.5% for liver, kidneys, pancreas, and spleen, respectively.
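A much-simplified, two-stage sketch of the dictionary-plus-classifier idea is shown below, using scikit-learn; the paper learns the dictionary and classifier jointly, whereas this illustration trains them sequentially, and the patch sizes, labels, and model choices are assumptions.

```python
# Rough two-stage sketch of dictionary learning plus a discriminative classifier;
# patch extraction, sizes, and the logistic classifier are assumptions.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
atlas_patches = rng.normal(size=(5000, 125))   # hypothetical 5x5x5 CT patches from atlases
atlas_labels = rng.integers(0, 5, size=5000)   # hypothetical organ labels per patch

# Learn a dictionary with reconstructive power over the atlas patches.
dico = MiniBatchDictionaryLearning(n_components=256, alpha=1.0, random_state=0)
codes = dico.fit_transform(atlas_patches)

# Train a discriminative classifier on the resulting sparse codes.
clf = LogisticRegression(max_iter=1000).fit(codes, atlas_labels)

# For an unseen target patch, the class probabilities act as a voxel-wise prior.
target_codes = dico.transform(rng.normal(size=(1, 125)))
prior = clf.predict_proba(target_codes)
```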


Medical Image Computing and Computer-Assisted Intervention | 2014

Geodesic Patch-Based Segmentation

Zehan Wang; Kanwal K. Bhatia; Ben Glocker; Antonio de Marvao; Tim Dawes; Kazunari Misawa; Kensaku Mori; Daniel Rueckert

Label propagation has been shown to be effective in many automatic segmentation applications. However, its reliance on accurate image alignment means that segmentation results can be affected by any registration errors which occur. Patch-based methods relax this dependence by avoiding explicit one-to-one correspondence assumptions between images but are still limited by the search window size. Too small, and it does not account for enough registration error; too big, and it becomes more likely to select incorrect patches of similar appearance for label fusion. This paper presents a novel patch-based label propagation approach which uses relative geodesic distances to define patient-specific coordinate systems as spatial context to overcome this problem. The approach is evaluated on multi-organ segmentation of 20 cardiac MR images and 100 abdominal CT images, demonstrating competitive results.
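The spatial context relies on geodesic distances computed through the image rather than Euclidean offsets. Below is a simplified 2D sketch of one way such a distance map can be computed, using an intensity-weighted grid graph and Dijkstra's algorithm; the graph construction and edge weighting are illustrative assumptions, not the paper's formulation.

```python
# Simplified 2D sketch of an intensity-weighted geodesic distance map from a reference
# structure, usable as patient-specific spatial context for patch matching.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

def geodesic_map(image, seed_mask, beta=10.0):
    h, w = image.shape
    graph = lil_matrix((h * w, h * w))
    for y in range(h):
        for x in range(w):
            i = y * w + x
            for dy, dx in ((0, 1), (1, 0)):       # 4-neighbour grid graph
                ny, nx = y + dy, x + dx
                if ny < h and nx < w:
                    j = ny * w + nx
                    # Edges crossing strong intensity changes are expensive to traverse.
                    cost = 1.0 + beta * abs(float(image[y, x]) - float(image[ny, nx]))
                    graph[i, j] = cost
                    graph[j, i] = cost
    seeds = np.flatnonzero(seed_mask.ravel())
    dist = dijkstra(graph.tocsr(), directed=False, indices=seeds).min(axis=0)
    return dist.reshape(h, w)

# The resulting map can be appended to patch descriptors as a patient-specific coordinate.
img = np.random.default_rng(0).random((32, 32))
seed = np.zeros((32, 32), dtype=bool); seed[16, 16] = True
context = geodesic_map(img, seed)
```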


Computer Vision and Pattern Recognition | 2017

Real-Time Video Super-Resolution with Spatio-Temporal Networks and Motion Compensation

Jose Caballero; Christian Ledig; Andrew P. Aitken; Alejandro Acosta; Johannes Totz; Zehan Wang; Wenzhe Shi

Convolutional neural networks have enabled accurate image super-resolution in real-time. However, recent attempts to benefit from temporal correlations in video super-resolution have been limited to naive or inefficient architectures. In this paper, we introduce spatio-temporal sub-pixel convolution networks that effectively exploit temporal redundancies and improve reconstruction accuracy while maintaining real-time speed. Specifically, we discuss the use of early fusion, slow fusion and 3D convolutions for the joint processing of multiple consecutive video frames. We also propose a novel joint motion compensation and video super-resolution algorithm that is orders of magnitude more efficient than competing methods, relying on a fast multi-resolution spatial transformer module that is end-to-end trainable. These contributions provide both higher accuracy and temporally more consistent videos, which we confirm qualitatively and quantitatively. Relative to single-frame models, spatio-temporal networks can either reduce the computational cost by 30% whilst maintaining the same quality or provide a 0.2dB gain for a similar computational cost. Results on publicly available datasets demonstrate that the proposed algorithms surpass current state-of-the-art performance in both accuracy and efficiency.
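Early fusion with motion compensation amounts to warping neighbouring frames towards the centre frame and concatenating them before super-resolution. The sketch below illustrates that step with a placeholder flow field; in the paper the flow comes from a trainable multi-resolution spatial transformer module, which is not reproduced here.

```python
# Sketch of early fusion with motion compensation: neighbouring frames are warped
# towards the centre frame, then concatenated along channels for joint processing.
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Warp a frame (N,C,H,W) by a dense flow (N,2,H,W) given in pixels as (dx, dy)."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).float().expand(n, h, w, 2)
    coords = base + flow.permute(0, 2, 3, 1)
    # Normalise pixel coordinates to [-1, 1] for grid_sample.
    coords[..., 0] = 2 * coords[..., 0] / (w - 1) - 1
    coords[..., 1] = 2 * coords[..., 1] / (h - 1) - 1
    return F.grid_sample(frame, coords, align_corners=True)

frames = torch.randn(1, 3, 3, 64, 64)     # (batch, time, channels, H, W)
zero_flow = torch.zeros(1, 2, 64, 64)     # placeholder motion estimate
warped_prev = warp(frames[:, 0], zero_flow)
warped_next = warp(frames[:, 2], zero_flow)
# Early fusion: concatenate compensated neighbours with the centre frame along channels.
fused = torch.cat((warped_prev, frames[:, 1], warped_next), dim=1)   # (1, 9, 64, 64)
```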


International Conference on Machine Learning | 2013

Patch-Based Segmentation without Registration: Application to Knee MRI

Zehan Wang; Claire R. Donoghue; Daniel Rueckert

Atlas-based segmentation techniques have proven effective in many automatic segmentation applications. However, the reliance on image correspondence means that the segmentation results can be affected by any registration errors which occur, particularly if there is a high degree of anatomical variability. This paper presents a novel multi-resolution patch-based segmentation framework which is able to work on images without requiring registration. Additionally, an image similarity metric using 3D histograms of oriented gradients is proposed to enable atlas selection in this context. We applied the proposed approach to segment MR images of the knee from the MICCAI SKI10 Grand Challenge, where 100 training atlases are provided and evaluation is conducted on 50 unseen test images. The proposed method achieved good scores overall and is comparable to the top entries in the challenge for cartilage segmentation, demonstrating good performance when compared against state-of-the-art approaches customised to knee MRI.
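The atlas-selection metric is built on 3D histograms of oriented gradients. The sketch below shows a much-simplified, global variant of such a descriptor (the paper's version is richer, e.g. computed over local cells); the binning scheme and the ranking by Euclidean distance are illustrative assumptions.

```python
# Simplified global 3D histogram-of-oriented-gradients descriptor for atlas selection.
import numpy as np

def hog3d(volume, n_bins=8):
    gz, gy, gx = np.gradient(volume.astype(float))
    mag = np.sqrt(gx**2 + gy**2 + gz**2)
    azimuth = np.arctan2(gy, gx)                       # in-plane gradient direction
    elevation = np.arctan2(gz, np.hypot(gx, gy))       # out-of-plane gradient direction
    # One magnitude-weighted histogram per angle, concatenated as the descriptor.
    h_a, _ = np.histogram(azimuth, bins=n_bins, range=(-np.pi, np.pi), weights=mag)
    h_e, _ = np.histogram(elevation, bins=n_bins, range=(-np.pi / 2, np.pi / 2), weights=mag)
    d = np.concatenate([h_a, h_e])
    return d / (np.linalg.norm(d) + 1e-8)

# Atlas selection: rank atlases by descriptor distance to the target volume.
rng = np.random.default_rng(0)
target = rng.random((32, 32, 32))
atlases = [rng.random((32, 32, 32)) for _ in range(5)]
scores = [np.linalg.norm(hog3d(target) - hog3d(a)) for a in atlases]
ranked = np.argsort(scores)    # most similar atlases first
```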


MCV'12: Proceedings of the Second International Conference on Medical Computer Vision: Recognition Techniques and Applications in Medical Imaging | 2012

Spatially aware patch-based segmentation (SAPS): an alternative patch-based segmentation framework

Zehan Wang; Robin Wolz; Tong Tong; Daniel Rueckert

Patch-based segmentation has been shown to be successful in a range of label propagation applications. Performing patch-based segmentation can be seen as a k-nearest neighbour problem, as the labelling of each voxel is determined according to the distances to its most similar patches. However, the reliance on a good affine registration, given the use of limited search windows, is a potential weakness. This paper presents a novel alternative framework which combines kNN search structures, such as ball trees, with a spatially weighted label fusion scheme to search patches over large regional areas, overcoming the problem of limited search windows. Our proposed framework (SAPS) improves the Dice scores of the results compared to existing patch-based segmentation frameworks.
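The framework combines a ball-tree kNN search over patches with spatially weighted label fusion. A minimal sketch of that combination using scikit-learn's BallTree is given below; the feature construction, spatial weighting, and Gaussian kernel width are illustrative assumptions.

```python
# Minimal sketch of kNN patch search with a ball tree plus spatially weighted fusion;
# the feature layout, spatial weight, and kernel width are illustrative assumptions.
import numpy as np
from sklearn.neighbors import BallTree

rng = np.random.default_rng(0)
atlas_patches = rng.normal(size=(10000, 27))          # hypothetical 3x3x3 atlas patches
atlas_coords = rng.uniform(0, 100, size=(10000, 3))   # voxel positions of those patches
atlas_labels = rng.integers(0, 2, size=10000)

spatial_weight = 0.5
# Index patches jointly on appearance and (down-weighted) spatial position.
tree = BallTree(np.hstack([atlas_patches, spatial_weight * atlas_coords]))

def fuse_label(patch, coord, k=20, h=1.0):
    query = np.hstack([patch, spatial_weight * coord])[None, :]
    dist, idx = tree.query(query, k=k)
    # Closer patches (in appearance and space) get exponentially larger votes.
    w = np.exp(-dist[0] ** 2 / (2 * h ** 2))
    votes = np.bincount(atlas_labels[idx[0]], weights=w, minlength=2)
    return int(np.argmax(votes))

label = fuse_label(rng.normal(size=27), np.array([50.0, 50.0, 50.0]))
```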


arXiv: Computer Vision and Pattern Recognition | 2016

Is the deconvolution layer the same as a convolutional layer?

Wenzhe Shi; Jose Caballero; Lucas Theis; Ferenc Huszár; Andrew P. Aitken; Christian Ledig; Zehan Wang


arXiv: Computer Vision and Pattern Recognition | 2017

Checkerboard artifact free sub-pixel convolution: A note on sub-pixel convolution, resize convolution and convolution resize

Andrew P. Aitken; Christian Ledig; Lucas Theis; Jose Caballero; Zehan Wang; Wenzhe Shi
