Publication


Featured research published by David Williams.


IEEE Transactions on Visualization and Computer Graphics | 2008

Volumetric Curved Planar Reformation for Virtual Endoscopy

David Williams; Sören Grimm; Ernesto Coto; Abdul V. Roudsari; Haralambos Hatzakis

Curved Planar Reformation (CPR) has proved to be a practical and widely used tool for the visualization of curved tubular structures within the human body. It has been useful in medical procedures involving the examination of blood vessels and the spine. However, it is more difficult to use for large tubular structures such as the trachea and the colon, because abnormalities may be smaller relative to the size of the structure and may not have such distinct density and shape characteristics.

Our new approach improves on this situation by using volume rendering for hollow regions and standard CPR for the surrounding tissue. This effectively combines grayscale contextual information with detailed color information from the area of interest. The approach is successfully used with each of the standard CPR types, and the resulting images are promising as an alternative to virtual endoscopy.

Because the CPR and the volume rendering are tightly coupled, the projection method used has a significant effect on properties of the volume renderer such as distortion and isometry. We describe and compare the different CPR projection methods and how they affect the volume rendering process.

A version of the algorithm is also presented which makes use of importance-driven techniques; this ensures the user's attention is always focused on the area of interest and also improves the speed of the algorithm.
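
The sketch below is a minimal CUDA illustration of the core compositing idea: along each resampled CPR ray, low-density samples inside the hollow lumen are volume rendered with a color transfer function, while denser surrounding tissue falls back to a standard grayscale CPR value. The volume layout, air threshold, and transfer function are illustrative assumptions, not the authors' implementation.

// Hedged sketch: per-pixel ray compositing over a volume already resampled along CPR rays.
__global__ void cprVolumeRenderKernel(const float* volume,   // densities, (y*width+x)*samplesPerRay + s
                                      float4* image,         // output RGBA image
                                      int width, int height, int samplesPerRay)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    const float AIR_THRESHOLD = 0.1f;   // assumed normalized density bounding the hollow lumen
    float4 accum = make_float4(0.f, 0.f, 0.f, 0.f);

    for (int s = 0; s < samplesPerRay; ++s) {
        float density = volume[(y * width + x) * samplesPerRay + s];

        if (density < AIR_THRESHOLD) {
            // Hollow region: front-to-back compositing with an assumed color transfer function.
            float4 src = make_float4(density * 4.f, density * 2.f, density, density * 0.5f);
            accum.x += (1.f - accum.w) * src.w * src.x;
            accum.y += (1.f - accum.w) * src.w * src.y;
            accum.z += (1.f - accum.w) * src.w * src.z;
            accum.w += (1.f - accum.w) * src.w;
        } else {
            // Surrounding tissue: terminate the ray with a standard grayscale CPR value.
            accum.x += (1.f - accum.w) * density;
            accum.y += (1.f - accum.w) * density;
            accum.z += (1.f - accum.w) * density;
            accum.w = 1.f;
            break;
        }
    }
    image[y * width + x] = accum;
}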


Electronic Notes in Theoretical Computer Science | 2007

Simulating and Compiling Code for the Sequential Quantum Random Access Machine

Rajagopal Nagarajan; Nikolaos Papanikolaou; David Williams

We present the SQRAM architecture for quantum computing, which is based on Knill's QRAM model. We detail a suitable instruction set, which implements a universal set of quantum gates, and demonstrate the operation of the SQRAM with Deutsch's quantum algorithm. The compilation of high-level quantum programs for the SQRAM machine is considered; we present templates for quantum assembly code and a method for decomposing matrices for complex quantum operations. The SQRAM simulator and compiler are discussed, along with directions for future work.
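
Below is a minimal sketch of the state-vector update a simulator ultimately performs when executing a single-qubit gate instruction such as the Hadamard used in Deutsch's algorithm: a 2x2 unitary is applied to every pair of amplitudes that differ only in the target qubit. The CUDA kernel and float2 amplitude layout (x = real part, y = imaginary part) are assumptions for illustration, not the SQRAM code itself.

__global__ void applySingleQubitGate(float2* state, int numAmplitudes, int targetQubit,
                                     float2 u00, float2 u01, float2 u10, float2 u11)
{
    int pair = blockIdx.x * blockDim.x + threadIdx.x;   // index over amplitude pairs
    if (pair >= numAmplitudes / 2) return;

    int stride = 1 << targetQubit;
    int i0 = (pair / stride) * (stride * 2) + (pair % stride);   // amplitude with target bit = 0
    int i1 = i0 + stride;                                        // amplitude with target bit = 1

    float2 a = state[i0], b = state[i1];

    // Complex multiply-accumulate: new_a = u00*a + u01*b, new_b = u10*a + u11*b.
    float2 na, nb;
    na.x = u00.x * a.x - u00.y * a.y + u01.x * b.x - u01.y * b.y;
    na.y = u00.x * a.y + u00.y * a.x + u01.x * b.y + u01.y * b.x;
    nb.x = u10.x * a.x - u10.y * a.y + u11.x * b.x - u11.y * b.y;
    nb.y = u10.x * a.y + u10.y * a.x + u11.x * b.y + u11.y * b.x;

    state[i0] = na;
    state[i1] = nb;
}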


International Conference on High Performance Computing and Simulation | 2013

GPU-ASIFT: A fast fully affine-invariant feature extraction algorithm

Valeriu Codreanu; Feng Dong; Baoquan Liu; Jos B. T. M. Roerdink; David Williams; Po Yang; Burhan Yasar

This paper presents a method that takes advantage of powerful graphics hardware to obtain fully affine-invariant image feature detection and matching. The chosen approach is the accurate, but also very computationally expensive, ASIFT algorithm. We have created a CUDA version of this algorithm that is up to 70 times faster than the original implementation, while keeping the algorithm's accuracy close to that of ASIFT. Its matching performance is therefore much better than that of other non-fully affine-invariant algorithms. We also adapted this approach to fit the multi-GPU paradigm in order to assess the acceleration potential of modern GPU clusters.
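
A hedged CUDA sketch of the view-simulation step that underlies ASIFT's full affine invariance: the input image is warped by an affine transform (a simulated camera tilt and rotation) before standard feature detection runs on the warped view. The inverse-mapping kernel and parameter names below are illustrative assumptions, not the GPU-ASIFT source.

__global__ void affineWarpKernel(const unsigned char* src, int srcW, int srcH,
                                 unsigned char* dst, int dstW, int dstH,
                                 float ia, float ib, float ic, float id,   // inverse affine matrix
                                 float tx, float ty)                       // inverse translation
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= dstW || y >= dstH) return;

    // Inverse mapping: find the source pixel that lands on (x, y) in the simulated view.
    float sx = ia * x + ib * y + tx;
    float sy = ic * x + id * y + ty;

    int isx = __float2int_rn(sx);
    int isy = __float2int_rn(sy);

    unsigned char value = 0;   // pixels mapping outside the source image stay black
    if (isx >= 0 && isx < srcW && isy >= 0 && isy < srcH)
        value = src[isy * srcW + isx];

    dst[y * dstW + x] = value;
}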


Computers & Graphics | 2014

Parallel centerline extraction on the GPU

Baoquan Liu; Alexandru Telea; Jos B. T. M. Roerdink; Gordon J. Clapworthy; David Williams; Po Yang; Feng Dong; Valeriu Codreanu; Alessandro Chiarini

Centerline extraction is important in a variety of visualization applications including shape analysis, geometry processing, and virtual endoscopy. Centerlines allow accurate measurements of length along winding tubular structures, assist automatic virtual navigation, and provide a path-planning system to control the movement and orientation of a virtual camera. However, efficiently computing centerlines with the desired accuracy has been a major challenge. Existing centerline methods are either not fast enough or not accurate enough for interactive application to complex 3D shapes. Some methods based on distance mapping are accurate, but these are sequential algorithms which have limited performance when running on the CPU. To our knowledge, there is no accurate parallel centerline algorithm that can take advantage of modern many-core parallel computing resources, such as GPUs, to perform automatic centerline extraction from large data volumes at interactive speed and with high accuracy. In this paper, we present a new parallel centerline extraction algorithm suitable for implementation on a GPU to produce highly accurate, 26-connected, one-voxel-thick centerlines at interactive speed. The resulting centerlines are as accurate as those produced by a state-of-the-art sequential CPU method [40], while being computed hundreds of times faster. Applications to fly-through path planning and virtual endoscopy are discussed. Experimental results demonstrating centeredness, robustness, and efficiency are presented.
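
As a hedged illustration of the distance-mapping computations such a method parallelizes, the kernel below performs one per-voxel relaxation pass of a simple parallel distance transform; it is not the paper's algorithm, and the flat volume layout and mask encoding are assumptions.

__global__ void distanceRelaxationPass(const float* distIn, float* distOut,
                                       const unsigned char* mask,   // 1 = inside object, 0 = outside
                                       int dimX, int dimY, int dimZ)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    int z = blockIdx.z * blockDim.z + threadIdx.z;
    if (x >= dimX || y >= dimY || z >= dimZ) return;

    int idx = (z * dimY + y) * dimX + x;
    if (mask[idx] == 0) { distOut[idx] = 0.f; return; }   // outside/boundary voxels stay at distance 0

    float best = distIn[idx];
    // Check the 6 face neighbors (a 26-neighborhood version simply adds more offsets).
    if (x > 0)        best = fminf(best, distIn[idx - 1] + 1.f);
    if (x < dimX - 1) best = fminf(best, distIn[idx + 1] + 1.f);
    if (y > 0)        best = fminf(best, distIn[idx - dimX] + 1.f);
    if (y < dimY - 1) best = fminf(best, distIn[idx + dimX] + 1.f);
    if (z > 0)        best = fminf(best, distIn[idx - dimX * dimY] + 1.f);
    if (z < dimZ - 1) best = fminf(best, distIn[idx + dimX * dimY] + 1.f);

    distOut[idx] = best;
}

Initializing inside voxels to a large value and repeating this pass until the field stops changing yields the distance map from which centerline voxels can then be selected.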


Signal Processing: Image Communication | 2016

GSWO: A programming model for GPU-enabled parallelization of sliding window operations in image processing

Po Yang; Gordon J. Clapworthy; Feng Dong; Valeriu Codreanu; David Williams; Baoquan Liu; Jos B. T. M. Roerdink; Zhikun Deng

Sliding Window Operations (SWOs) are widely used in image processing applications. They often have to be performed repeatedly across the target image, which can demand significant computing resources when processing large images with large windows. In applications in which real-time performance is essential, running these filters on a CPU often fails to deliver results within an acceptable timeframe. The emergence of sophisticated graphics processing units (GPUs) presents an opportunity to address this challenge. However, GPU programming involves a steep learning curve and is error-prone for novices, so a tool that can automatically produce a GPU implementation from the original CPU source code offers an attractive means of harnessing GPU power effectively. This paper presents a GPU-enabled programming model, called GSWO, which can assist GPU novices by converting their SWO-based image processing applications from the original C/C++ source code to CUDA code in a highly automated manner. This model includes a new set of simple SWO pragmas to generate GPU kernels and to support effective GPU memory management. We have implemented this programming model based on a CPU-to-GPU translator (C2GPU). Evaluations have been performed on a number of typical SWO image filters and applications. The experimental results show that the GSWO model is capable of efficiently accelerating these applications, with improved applicability and performance speed-up compared to several leading CPU-to-GPU source-to-source translators.
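
A hedged illustration of the kind of CUDA kernel a sliding-window operation is mapped to on the GPU: each thread computes one output pixel by averaging a (2r+1) x (2r+1) window around it. This is a generic SWO example; GSWO's actual pragma syntax and generated code are not reproduced here.

__global__ void meanFilterKernel(const float* input, float* output,
                                 int width, int height, int r)   // r = window radius
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    float sum = 0.f;
    int count = 0;
    for (int dy = -r; dy <= r; ++dy) {
        for (int dx = -r; dx <= r; ++dx) {
            int nx = x + dx, ny = y + dy;
            if (nx >= 0 && nx < width && ny >= 0 && ny < height) {
                sum += input[ny * width + nx];
                ++count;
            }
        }
    }
    output[y * width + x] = sum / count;
}

On the CPU the same filter is written as four nested loops; a translator's task is essentially to turn the two outer loops into the thread grid above and to insert the host/device transfers for the input and output buffers.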


International Conference on Parallel Processing | 2013

Evaluation of Autoparallelization Toolkits for Commodity GPUs

David Williams; Valeriu Codreanu; Po Yang; Baoquan Liu; Feng Dong; Burhan Yasar; Babak Mahdian; Alessandro Chiarini; Xia Zhao; Jos B. T. M. Roerdink

In this paper we evaluate the performance of the OpenACC and Mint toolkits against C and CUDA implementations of the standard PolyBench test suite. Our analysis reveals that performance is similar in many cases, but that a certain set of code constructs impedes the ability of Mint to generate optimal code. We then present some small improvements which we integrate into our own GPSME toolkit (which is derived from Mint) and show that our toolkit now outperforms OpenACC in the majority of tests.


2013 International Conference on Computer Medical Applications (ICCMA) | 2013

Accelerating colonic polyp detection using commodity graphics hardware

David Williams; Valeriu Codreanu; Jos B. T. M. Roerdink; Po Yang; Baoquan Liu; Feng Dong; Alessandro Chiarini

We present a parallel implementation of an algorithm for the detection of colonic polyps from CT data sets. This implementation is designed specifically to take advantage of the computational power available on modern Graphics Processing Units (GPUs), which significantly reduces the execution time to streamline the workflow of clinicians examining the data. We provide details about the changes which were made to the existing algorithm to suit the new target hardware, and perform tests which demonstrate that the results are a very close match to the reference implementation while being computed in a fraction of the time.
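
A hypothetical sketch of the per-voxel parallelism such an implementation exploits: each thread flags soft-tissue voxels bordering the air-filled lumen as colon-wall candidates for later feature computation. The thresholds and this candidate test are illustrative assumptions; the paper's actual detection features are not reproduced here.

__global__ void markWallCandidates(const short* hu,            // CT intensities in Hounsfield units
                                   unsigned char* candidate,   // output: 1 = candidate wall voxel
                                   int dimX, int dimY, int dimZ)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    int z = blockIdx.z * blockDim.z + threadIdx.z;
    if (x <= 0 || y <= 0 || z <= 0 || x >= dimX - 1 || y >= dimY - 1 || z >= dimZ - 1) return;

    int idx = (z * dimY + y) * dimX + x;
    const short AIR = -800;          // assumed HU threshold for the air-filled lumen
    const short TISSUE_LO = -200;    // assumed soft-tissue range
    const short TISSUE_HI = 200;

    bool isTissue = hu[idx] > TISSUE_LO && hu[idx] < TISSUE_HI;
    bool nextToAir = hu[idx - 1] < AIR || hu[idx + 1] < AIR ||
                     hu[idx - dimX] < AIR || hu[idx + dimX] < AIR ||
                     hu[idx - dimX * dimY] < AIR || hu[idx + dimX * dimY] < AIR;

    candidate[idx] = (isTissue && nextToAir) ? 1 : 0;
}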


Concurrency and Computation: Practice and Experience | 2016

Evaluating automatically parallelized versions of the support vector machine

Valeriu Codreanu; Bob Dröge; David Williams; Burhan Yasar; Po Yang; Baoquan Liu; Feng Dong; Olarik Surinta; Lambert Schomaker; Jos B. T. M. Roerdink; Marco Wiering

The support vector machine (SVM) is a supervised learning algorithm used for recognizing patterns in data. It is a very popular technique in machine learning and has been successfully used in applications such as image classification, protein classification, and handwriting recognition. However, the computational complexity of the kernelized version of the algorithm grows quadratically with the number of training examples. To tackle this high computational complexity, we have developed a directive-based approach that converts a gradient-ascent based training algorithm for the CPU to an efficient graphics processing unit (GPU) implementation. We compare our GPU-based SVM training algorithm to the standard LibSVM CPU implementation, a highly optimized GPU-LibSVM implementation, as well as to a directive-based OpenACC implementation. The results on different handwritten digit classification datasets demonstrate a significant speed-up for the current approach when compared to the CPU and OpenACC versions. Furthermore, our solution is almost as fast and sometimes even faster than the highly optimized CUBLAS-based GPU-LibSVM implementation, without sacrificing the algorithm's accuracy.
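
A minimal CUDA sketch of the quadratic-cost computation that dominates kernelized SVM training and that a GPU version parallelizes: filling the RBF kernel matrix K(i, j) = exp(-gamma * ||x_i - x_j||^2) with one thread per entry. The row-major data layout is assumed; this is not the paper's implementation.

__global__ void rbfKernelMatrix(const float* samples,   // numSamples x numFeatures, row-major
                                float* K,                // numSamples x numSamples output
                                int numSamples, int numFeatures, float gamma)
{
    int i = blockIdx.y * blockDim.y + threadIdx.y;
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numSamples || j >= numSamples) return;

    float dist2 = 0.f;
    for (int f = 0; f < numFeatures; ++f) {
        float d = samples[i * numFeatures + f] - samples[j * numFeatures + f];
        dist2 += d * d;
    }
    K[i * numSamples + j] = expf(-gamma * dist2);
}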


Human-Computer Interaction with Mobile Devices and Services | 2006

Collaborative mobile user interface design: how should companies design the mobile UI together?

David Williams

The panel will discuss the challenges, real-world examples, and future directions in the collaborative design of mobile services, devices, and application user interfaces.


IEEE Transactions on Industrial Informatics | 2018

Improving Utility of GPU in Accelerating Industrial Applications With User-Centered Automatic Code Translation

Po Yang; Feng Dong; Valeriu Codreanu; David Williams; Jos B. T. M. Roerdink; Baoquan Liu; Amjad Anvari-Moghaddam; Geyong Min

Small to medium enterprises (SMEs), particularly those whose business is focused on developing innovative products, are limited by a major bottleneck in the speed of computation in many applications. Recent developments in GPUs have markedly increased their versatility across many computational areas, but due to a lack of specialist GPU programming skills, this explosion of GPU power has not been fully utilized in general SME applications by inexperienced users. Moreover, the existing automatic CPU-to-GPU code translators are mainly designed for research purposes, with poor user interface design, and are hard to use; little attention has been paid to the applicability, usability, and learnability of these tools for ordinary users. In this paper, we present an online automated CPU-to-GPU source translation system (GPSME) that allows inexperienced users to utilize GPU capability in accelerating general SME applications. This system designs and implements a directive programming model with a new kernel generation scheme and memory management hierarchy to optimize its performance. A web service interface is designed for inexperienced users to easily and flexibly invoke the automatic source translator. Our experiments with non-expert GPU users in four SMEs show that the GPSME system can efficiently accelerate real-world applications by at least 4x and has better applicability, usability, and learnability than existing automatic CPU-to-GPU source translators.
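
A hedged illustration of what a CPU-to-GPU source translator produces for a simple element-wise C loop; neither GPSME's directive syntax nor its generated code is reproduced here.

// Original CPU loop (shown as a comment):
//
//   for (int i = 0; i < n; i++)
//       out[i] = a * in[i] + b;

__global__ void saxpyLikeKernel(const float* in, float* out, int n, float a, float b)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // the loop index becomes the thread index
    if (i < n)
        out[i] = a * in[i] + b;
}

// The translator must also insert the host-side plumbing the original loop never needed:
// cudaMalloc for in/out, a cudaMemcpy of the input, a launch such as
//   saxpyLikeKernel<<<(n + 255) / 256, 256>>>(d_in, d_out, n, a, b);
// and a copy of the result back to the host.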

Collaboration


Dive into David Williams's collaboration.

Top Co-Authors

Baoquan Liu
University of Bedfordshire

Feng Dong
University of Bedfordshire

Po Yang
Liverpool John Moores University

Xia Zhao
University of Bedfordshire

Bob Dröge
University of Groningen

E Valentijn
Kapteyn Astronomical Institute