Paul Y. S. Cheung
University of Hong Kong
Publications
Featured research published by Paul Y. S. Cheung.
IEEE Transactions on Biomedical Engineering | 2008
K. W. Sum; Paul Y. S. Cheung
Vessel extraction is one of the critical tasks in clinical practice. This communication presents a new approach for vessel extraction using a level-set-based active contour by defining a novel local term that takes local image contrast into account. The proposed model not only preserves the performance of the existing models on blurry images, but also overcomes their inability to handle nonuniform illumination. The efficacy of the approach is demonstrated with experiments involving both synthetic images and clinical angiograms.
Pattern Recognition | 2007
K. W. Sum; Paul Y. S. Cheung
Snakes are widely used in the fields of computer vision and image processing. Traditional snakes, however, suffer from a number of shortcomings, including a limited capture range and difficulty extracting concave objects. The gradient vector flow (GVF) snake improves concave object extraction, but at a high computational cost. In this paper, we present a new external force, generated by a novel interpolation scheme, that reduces the computational requirement significantly while at the same time improving both the capture range and the concave object extraction capability.
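For context, the traditional formulation this paper improves on evolves the contour with implicit internal (smoothness) forces plus an external force sampled at the contour points. A minimal sketch of that classic scheme follows; the parameter values and the simple radial external force are illustrative assumptions, not the paper's interpolation-based force:

```python
import numpy as np

def snake_step_matrix(n, alpha=0.1, beta=0.1, gamma=1.0):
    """Build the cyclic internal-energy matrix A of the traditional snake
    (alpha penalizes stretching, beta penalizes bending) and return the
    implicit-update operator (A + gamma*I)^-1."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2 * alpha + 6 * beta
        A[i, (i - 1) % n] = A[i, (i + 1) % n] = -alpha - 4 * beta
        A[i, (i - 2) % n] = A[i, (i + 2) % n] = beta
    return np.linalg.inv(A + gamma * np.eye(n))

def evolve_snake(pts, ext_force, n_iter=100, gamma=1.0, **kw):
    """Classic snake evolution: implicit internal forces combined with an
    external force evaluated at the current contour points."""
    M = snake_step_matrix(len(pts), gamma=gamma, **kw)
    for _ in range(n_iter):
        pts = M @ (gamma * pts + ext_force(pts))
    return pts
```

The external force is any callable returning a per-point force vector; the paper's contribution replaces the costly GVF field with an interpolation-generated force while keeping an evolution loop of this kind.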
Optical Engineering | 2001
Hing Yip Chung; Nelson Hon Ching Yung; Paul Y. S. Cheung
This paper presents a new block-based motion estimation algorithm that employs motion-vector prediction to locate an initial search point, called the search center, and an outward spiral search pattern with motion-vector refinement to speed up the motion estimation process. The proposed algorithm is only slightly slower than cross search, but achieves a peak signal-to-noise ratio (PSNR) very close to that of full search (FS). Our research shows that the motion vector of a target block can be predicted from the motion vectors of its neighboring blocks, and the predicted motion vector can be used to locate a search center in the search window. This approach has two distinct merits. First, as the search center is closer to the optimum motion vector, the probability of finding it is substantially higher. Second, far fewer search points are needed to reach it. Results show that the proposed algorithm achieves 99.7% to 100% of the average PSNR of FS while requiring only 1.40% to 4.07% of its computation time. When compared with six other fast motion estimation algorithms, it offers the best trade-off between the two objective measures: average PSNR and search time.
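The prediction-plus-spiral idea can be sketched as follows. The block size, the median predictor, and the SAD cost below are illustrative assumptions; the paper's exact prediction and refinement rules may differ:

```python
import numpy as np

def predict_mv(neighbor_mvs):
    """Predict a search center as the component-wise median of the
    motion vectors of already-coded neighboring blocks."""
    arr = np.array(neighbor_mvs)
    return (int(np.median(arr[:, 0])), int(np.median(arr[:, 1])))

def sad(cur, ref, bx, by, mvx, mvy, bs=8):
    """Sum of absolute differences between the current block at (bx, by)
    and the reference block displaced by candidate vector (mvx, mvy)."""
    h, w = ref.shape
    x, y = bx + mvx, by + mvy
    if x < 0 or y < 0 or x + bs > w or y + bs > h:
        return np.inf  # candidate falls outside the reference frame
    cur_blk = cur[by:by + bs, bx:bx + bs].astype(np.int32)
    ref_blk = ref[y:y + bs, x:x + bs].astype(np.int32)
    return int(np.abs(cur_blk - ref_blk).sum())

def spiral_offsets(radius):
    """Yield (dx, dy) offsets ring by ring, from the center outward."""
    yield (0, 0)
    for r in range(1, radius + 1):
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                if max(abs(dx), abs(dy)) == r:
                    yield (dx, dy)

def spiral_search(cur, ref, bx, by, neighbor_mvs, radius=3, bs=8):
    """Evaluate candidates along an outward spiral centered on the
    predicted search center; return the best motion vector and cost."""
    cx, cy = predict_mv(neighbor_mvs)
    best_mv, best_cost = (cx, cy), np.inf
    for dx, dy in spiral_offsets(radius):
        cost = sad(cur, ref, bx, by, cx + dx, cy + dy, bs)
        if cost < best_cost:
            best_cost, best_mv = cost, (cx + dx, cy + dy)
    return best_mv, best_cost
```

Because a good prediction puts the optimum near the spiral's center, the search usually terminates with far fewer cost evaluations than a full-window scan.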
IEEE Transactions on Circuits and Systems for Video Technology | 2000
Kwong-Keung Leung; Nelson Hon Ching Yung; Paul Y. S. Cheung
This paper presents a parallelization methodology for video coding based on the philosophy of hiding as much communication behind computation as possible. It models task/data size, processor cache capacity, and communication contention through a systematic decomposition and scheduling approach. With the aid of Petri nets and task graphs for representation and analysis, it employs a triple-buffering scheme that enables frame capture, management, and coding to be performed in parallel. The theoretical speedup analysis indicates that this method offers excellent communication hiding, resulting in system efficiency well above 90%. To prove its practicality, an H.261 video encoder has been implemented on a TMS320C80 system using the method. Its performance was measured, from which the speedup and efficiency figures were calculated. The only difference detected between the theoretical and measured data is the program control overhead not accounted for in the theoretical model. Even so, the measured speedup of the H.261 encoder is 3.67 and 3.76 on four parallel processors (PPs) for QCIF and 352×240 video, respectively, corresponding to frame rates of 30.7 and 9.25 frames per second and system efficiencies of 91.8% and 94%. This method is particularly efficient for platforms with a small number of parallel processors.
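The reported efficiency figures follow directly from the standard definition, efficiency = speedup / processor count, which can be checked in a couple of lines:

```python
def efficiency(speedup, processors):
    """Parallel efficiency: achieved speedup as a fraction of the ideal
    linear speedup on the given number of processors."""
    return speedup / processors

# The abstract's measured figures on four parallel processors:
qcif = efficiency(3.67, 4)  # 0.9175 -> 91.8%
sif = efficiency(3.76, 4)   # 0.94   -> 94%
```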
International Conference on Acoustics, Speech, and Signal Processing | 2007
Anthony K. W. Sum; Paul Y. S. Cheung
Anisotropic diffusion is an iterative process that provides efficient signal smoothing while preserving features. However, traditional anisotropic diffusion algorithms are highly sensitive to the number of iterations. In this paper, we introduce a novel modification of the diffusion formulation that stabilizes the diffusion results. It is generally applicable to most anisotropic diffusion algorithms, and experimental results show that the stabilized algorithms produce improved results.
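The "traditional" diffusion referred to is typically the Perona-Malik scheme, in which every iteration smooths the image a little more, so the result depends strongly on when you stop. A minimal sketch of that baseline (illustrative parameters; not the paper's stabilized formulation):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=30.0, dt=0.2):
    """Classic Perona-Malik anisotropic diffusion. Each iteration diffuses
    intensity toward the four neighbors, attenuated by an edge-stopping
    conductance, so large gradients (edges) diffuse far less than noise.
    Borders wrap around via np.roll, for brevity."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbors.
        dN = np.roll(u, 1, axis=0) - u
        dS = np.roll(u, -1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductance g(d) = exp(-(d/kappa)^2).
        cN = np.exp(-(dN / kappa) ** 2)
        cS = np.exp(-(dS / kappa) ** 2)
        cE = np.exp(-(dE / kappa) ** 2)
        cW = np.exp(-(dW / kappa) ** 2)
        u += dt * (cN * dN + cS * dS + cE * dE + cW * dW)
    return u
```

Running this with n_iter=20 versus n_iter=200 gives visibly different results on the same image, which is precisely the iteration sensitivity the paper's stabilization targets.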
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2005
Ronald H. Y. Chung; Nelson Hon Ching Yung; Paul Y. S. Cheung
This paper proposes a general quadrilateral-based framework for image segmentation, in which quadrilaterals are first constructed from an edge map and neighboring quadrilaterals with similar features of interest are then merged to form regions. Under the proposed framework, the quadrilaterals eliminate local variations and unnecessary details before merging, so that each segmented region is accurately and completely described by a set of quadrilaterals. To illustrate the effectiveness of the framework, we derive from it an efficient, high-performance, parameterless quadrilateral-based segmentation algorithm. The algorithm shows that regions obtained under the framework are segmented into multiple levels of quadrilaterals that represent the regions accurately, without severely over- or undersegmenting them. When evaluated objectively and subjectively, the proposed algorithm performs better than three other segmentation techniques, namely seeded region growing, k-means clustering, and constrained gravitational clustering, and offers an efficient description of the segmented objects conducive to content-based applications.
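The merging stage can be illustrated generically: given primitives (the framework's quadrilaterals, here reduced to indexed items with one feature each) and an adjacency list, union-find groups similar neighbors into regions. This is a deliberately simplified sketch: the actual framework merges on richer features and is parameterless, whereas this version uses a fixed threshold on a single mean-intensity feature:

```python
def merge_primitives(means, adjacency, thresh=10.0):
    """Greedily merge adjacent primitives whose features differ by less
    than `thresh`, using union-find with path halving. Returns a region
    label (root index) per primitive."""
    parent = list(range(len(means)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j in adjacency:
        if abs(means[i] - means[j]) < thresh:
            parent[find(i)] = find(j)
    return [find(i) for i in range(len(means))]
```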
computer graphics international | 1998
Sam Lin; Rynson W. H. Lau; Xiaola Lin; Paul Y. S. Cheung
We describe a parallel rendering method based on the adaptive supersampling technique to produce anti-aliased images with minimal memory consumption. Unlike traditional supersampling methods, this one does not supersample every pixel, but only the edge pixels. We consider various strategies to reduce memory consumption so that the method remains applicable where only a limited or fixed amount of pre-allocated memory is available. This is a very important issue, especially in parallel rendering. We have implemented our algorithm on a parallel machine based on the message-passing model. Towards the end of the paper, we present experimental results on the memory usage and performance of the method.
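The core idea, supersampling only the pixels that straddle an edge, can be sketched for an analytic coverage test. The edge test and sampling pattern below are illustrative assumptions, not the paper's parallel implementation:

```python
import numpy as np

def render_adaptive(inside, w, h, ss=4):
    """Adaptive supersampling sketch: sample each pixel center once, then
    supersample (ss x ss) only pixels whose 3x3 neighborhood disagrees,
    i.e. candidate edge pixels. `inside(x, y)` is a boolean coverage test
    for the scene. Returns the image and the count of supersampled pixels."""
    base = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            base[y, x] = 1.0 if inside(x + 0.5, y + 0.5) else 0.0
    out = base.copy()
    n_super = 0
    for y in range(h):
        for x in range(w):
            nb = base[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            if nb.min() != nb.max():  # neighbors disagree: edge pixel
                n_super += 1
                acc = 0.0
                for sy in range(ss):
                    for sx in range(ss):
                        acc += inside(x + (sx + 0.5) / ss, y + (sy + 0.5) / ss)
                out[y, x] = acc / (ss * ss)
    return out, n_super
```

Only the pixels counted in n_super would need subsample storage, which is the source of the memory saving over supersampling every pixel.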
IEEE Transactions on Parallel and Distributed Systems | 2000
Yuzhong Sun; Paul Y. S. Cheung; Xiaola Lin
In this paper, we introduce a family of scalable interconnection network topologies, named Recursive Cube of Rings (RCR), which are recursively constructed by adding ring edges to a cube. RCRs possess many topological properties desirable in building scalable parallel machines, such as fixed degree, small diameter, wide bisection width, symmetry, and fault tolerance. We first examine the topological properties of RCRs. We then present and analyze a general deadlock-free routing algorithm for RCRs. Using a complete binary tree embedded into an RCR with an expansion cost close to one, we propose an efficient broadcast routing algorithm on RCRs. We also derive an upper bound on the number of message-passing steps in one broadcast operation on a general RCR.
IEEE Transactions on Parallel and Distributed Systems | 2001
Wai-Sum Lin; Rynson W. H. Lau; Kai Hwang; Xiaola Lin; Paul Y. S. Cheung
This paper presents the design and performance of a new parallel graphics renderer for 3D images. The renderer is based on an adaptive supersampling approach designed for time- and space-efficient execution on two classes of parallel computers. Our rendering scheme takes subpixel supersamples only along polygon edges, leading to a significant reduction in rendering time and in buffer memory requirements. Furthermore, we offer a balanced rasterization of all transformed polygons. Experimental results demonstrate these advantages on both a shared-memory SGI multiprocessor server and a Unix cluster of Sun workstations. We examine the performance effects of the new rendering scheme with respect to subpixel resolution, polygon count, scene complexity, and memory requirements. The balanced parallel renderer demonstrates scalable performance with respect to increases in graphic complexity and machine size, and it outperforms Crow's scheme in the benchmark experiments performed. The improvements are on three fronts: (1) reduced rendering time, (2) higher efficiency through balanced workload, and (3) adaptivity to the available buffer memory size. The balanced renderer can be cost-effectively embedded within many 3D graphics algorithms, such as those for edge smoothing and 3D visualization. Our parallel renderer is MPI-coded, offering high portability and cross-platform performance. These advantages can greatly improve the QoS in 3D imaging and real-time interactive graphics.
IEEE Transactions on Biomedical Engineering | 2014
X. Kang; Mehran Armand; Yoshito Otake; Wai Pan Yau; Paul Y. S. Cheung; Yong Hu; Russell H. Taylor
2-D-to-3-D registration is critical and fundamental in image-guided interventions. It can be achieved from a single image using paired point correspondences between the object and the image, but the common assumption that such correspondences can readily be established does not necessarily hold for image-guided interventions. Intraoperative image clutter and imperfect feature extraction may introduce false detections, and, due to the physics of X-ray imaging, 2-D image point features may be indistinguishable from one another or obscured by anatomy. These effects make it difficult to establish correspondences between image features and 3-D data points. In this paper, we propose an accurate, robust, and fast method that accomplishes 2-D-3-D registration using a single image, without the need for paired correspondences and in the presence of false detections. We formulate 2-D-3-D registration as a maximum-likelihood estimation problem, which is then solved by coupling expectation maximization with particle swarm optimization. The proposed method was evaluated in a phantom and a cadaver study. In the phantom study, it achieved subdegree rotation errors and submillimeter in-plane (X-Y plane) translation errors. In both studies, it outperformed state-of-the-art methods that do not use paired correspondences and achieved the same accuracy as a state-of-the-art globally optimal method that uses correct paired correspondences.
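The correspondence-free EM structure can be illustrated on a toy problem. This is a heavily simplified sketch: the paper solves the M-step with particle swarm optimization over a 2-D-to-3-D projection model and handles false detections, whereas here the transform is a plain 2-D translation (so the M-step is closed form) and every observation is assumed genuine; only the soft-correspondence EM skeleton is kept:

```python
import numpy as np

def em_register_translation(model, observed, n_iter=50, sigma=1.0):
    """Toy maximum-likelihood registration without known correspondences.
    E-step: soft correspondence probabilities between each observation and
    each model point under an isotropic Gaussian. M-step: weighted
    least-squares update of the 2-D translation."""
    t = np.zeros(2)
    for _ in range(n_iter):
        # E-step: responsibility of each model point for each observation.
        diff = observed[:, None, :] - (model[None, :, :] + t)   # (N, M, 2)
        logp = -np.sum(diff ** 2, axis=2) / (2 * sigma ** 2)
        logp -= logp.max(axis=1, keepdims=True)                 # stabilize exp
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)                       # (N, M)
        # M-step: translation minimizing the responsibility-weighted
        # squared residuals (closed form for a pure translation).
        resid = observed[:, None, :] - model[None, :, :]
        t = (r[:, :, None] * resid).sum(axis=(0, 1)) / r.sum()
    return t
```

As the estimate approaches the true transform, the responsibilities sharpen toward the correct pairings, which is the same mechanism that lets the paper's method dispense with explicitly established correspondences.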