
Publications


Featured research published by Adrian James Chung.


Medical Image Computing and Computer Assisted Intervention | 2007

pq-Space Based Non-Photorealistic Rendering for Augmented Reality

Mirna Lerotic; Adrian James Chung; George P. Mylonas; Guang-Zhong Yang

The increasing use of robotic assisted minimally invasive surgery (MIS) provides an ideal environment for using Augmented Reality (AR) for performing image guided surgery. Seamless synthesis of AR depends on a number of factors relating to the way in which virtual objects appear and visually interact with a real environment. Traditional overlaid AR approaches generally suffer from a loss of depth perception. This paper presents a new AR method for robotic assisted MIS, which uses a novel pq-space based non-photorealistic rendering technique to provide see-through vision of the embedded virtual object whilst maintaining salient details of the exposed anatomical surface. Experimental results with both phantom and in vivo lung lobectomy data demonstrate the visual realism achieved by the proposed method and its accuracy in providing high-fidelity AR depth perception.
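
The pq-space referred to here (and in several of the papers below) is the space of surface gradients, where p and q are the partial derivatives of depth with respect to the image x and y axes. As a minimal illustrative sketch (not code from the paper; a synthetic hemisphere stands in for real anatomy), a per-pixel pq map and the corresponding surface normals can be computed from a depth map with finite differences:

    import numpy as np

    def depth_to_pq(z):
        """Compute the pq-space representation (p = dz/dx, q = dz/dy)
        of a depth map z using per-pixel finite differences."""
        p = np.gradient(z, axis=1)   # derivative along x (columns)
        q = np.gradient(z, axis=0)   # derivative along y (rows)
        return p, q

    def pq_to_normals(p, q):
        """Convert (p, q) gradients to unit surface normals (-p, -q, 1)/norm."""
        n = np.dstack([-p, -q, np.ones_like(p)])
        return n / np.linalg.norm(n, axis=2, keepdims=True)

    # Example on a synthetic hemispherical depth map.
    y, x = np.mgrid[-1:1:128j, -1:1:128j]
    z = np.sqrt(np.clip(1.0 - x**2 - y**2, 0.0, None))
    p, q = depth_to_pq(z)
    normals = pq_to_normals(p, q)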


IEEE Transactions on Medical Imaging | 2006

Nonrigid 2-D/3-D Registration for Patient Specific Bronchoscopy Simulation With Statistical Shape Modeling: Phantom Validation

Adrian James Chung; Guang-Zhong Yang

This paper presents a nonrigid two-dimensional/three-dimensional (2-D/3-D) registration framework and its phantom validation for subject-specific bronchoscope simulation. The method exploits the recent development of five degrees-of-freedom miniaturized catheter-tip electromagnetic trackers such that the position and orientation of the bronchoscope can be accurately determined. This allows the effective recovery of unknown camera rotation and airway deformation, which is modelled by an active shape model (ASM). The ASM captures the intrinsic variability of the tracheo-bronchial tree during breathing and is specific to the class of motion it represents. The method reduces the number of parameters that control the deformation, and thus greatly simplifies the optimisation procedure. Subsequently, pq-based registration is performed to recover both the camera pose and the parameters of the ASM. Detailed assessment of the algorithm is performed on a deformable airway phantom, with the ground truth data provided by an additional six degrees-of-freedom electromagnetic (EM) tracker that monitors the level of simulated respiratory motion.
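
As a hedged illustration of the statistical shape modelling step (a generic PCA-based shape model, not the specific airway ASM used in the paper), the deformation is summarised by a mean shape plus a few principal modes, x ≈ x_mean + Φ b, so that registration only has to optimise the low-dimensional weight vector b together with the camera pose:

    import numpy as np

    def build_asm(training_shapes, n_modes=5):
        """PCA-based shape model: training_shapes holds one flattened
        landmark vector per row (e.g. aligned airway landmark points)."""
        mean = training_shapes.mean(axis=0)
        u, s, vt = np.linalg.svd(training_shapes - mean, full_matrices=False)
        modes = vt[:n_modes]                                 # principal modes of variation
        variances = (s[:n_modes] ** 2) / (len(training_shapes) - 1)
        return mean, modes, variances

    def synthesize(mean, modes, b):
        """Generate a new shape instance from mode weights b."""
        return mean + b @ modes

    def project(mean, modes, shape):
        """Recover the low-dimensional weights b for an observed shape."""
        return (shape - mean) @ modes.T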


International Workshop on Medical Imaging and Virtual Reality | 2004

Freehand Cocalibration of Optical and Electromagnetic Trackers for Navigated Bronchoscopy

Adrian James Chung; Philip J. Edwards; Guang-Zhong Yang

Recent technical advances in electromagnetic (EM) tracking have facilitated the use of EM sensors in surgical interventions. Because the EM field is distorted when the sensor is placed in close proximity to metallic objects, EM trackers require in situ calibration to maintain an acceptable degree of accuracy. In this paper, a freehand method is presented for calibrating electromagnetic position sensors by mapping their coordinate measurements to those from optical trackers. Unlike previous techniques, the proposed method allows free movement of the calibration object, permitting C² continuity and interdependence between positional and angular corrections. The proposed method involves calculation of a mapping from {ℝ³, SO(3)} to {ℝ³, SO(3)} with radial basis function interpolation based on a modified distance metric. The system provides efficient distortion correction of the EM field, and is applicable to clinical situations where a rapid calibration of EM tracking is required.
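
A minimal sketch of the positional part of such a correction is given below, assuming paired samples of the same locations from the EM and optical trackers are already expressed in a common coordinate frame; the Gaussian kernel, the regularisation term and all names are illustrative, and the paper's modified distance metric coupling position and orientation is not reproduced:

    import numpy as np

    def fit_rbf_correction(em_pts, opt_pts, sigma=50.0):
        """Fit a Gaussian RBF field mapping distorted EM positions onto
        optically tracked (reference) positions.
        em_pts, opt_pts: (N, 3) arrays of corresponding samples in mm."""
        d = np.linalg.norm(em_pts[:, None, :] - em_pts[None, :, :], axis=2)
        K = np.exp(-(d / sigma) ** 2)                 # N x N kernel matrix
        weights = np.linalg.solve(K + 1e-6 * np.eye(len(K)), opt_pts - em_pts)
        return weights

    def apply_rbf_correction(em_query, em_pts, weights, sigma=50.0):
        """Correct new EM readings using the fitted RBF field."""
        d = np.linalg.norm(em_query[:, None, :] - em_pts[None, :, :], axis=2)
        return em_query + np.exp(-(d / sigma) ** 2) @ weights

In practice the orientation component would be corrected in a similar way, which is where a combined metric over {ℝ³, SO(3)} comes in.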


International Conference on Medical Imaging and Augmented Reality | 2008

Perceptual Docking for Robotic Control

Guang-Zhong Yang; George P. Mylonas; Ka-Wai Kwok; Adrian James Chung

In current robotic surgery, dexterity is enhanced by microprocessor-controlled mechanical wrists which allow motion scaling for reduced gross hand movements and improved performance of micro-scale tasks. The continuing evolution of the technology, including force feedback and virtual immobilization through real-time motion adaptation, will permit complex procedures such as beating-heart surgery to be carried out under a static frame of reference. In pursuing more adaptive and intelligent robotic designs, the regulatory, ethical and legal barriers imposed on interventional surgical robots have given rise to the need for tightly integrated control between the operator and the robot when autonomy is considered. This paper outlines the general concept of perceptual docking for robotic control and how it can be used for learning and knowledge acquisition in robotic assisted minimally invasive surgery, such that operator-specific motor and perceptual/cognitive behaviour is acquired through in situ sensing. A gaze-contingent framework is presented in this paper as an example to illustrate how saccadic eye movements and ocular vergence can be used for attention selection, recovering 3D tissue deformation and motor channelling during minimally invasive surgical procedures.
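
As a hedged, simplified illustration of how ocular vergence can yield a 3D point of interest (generic ray triangulation, not the gaze-contingent framework described in the paper), the fixation point can be taken as the midpoint of the shortest segment between the two eyes' gaze rays:

    import numpy as np

    def fixation_point(o_left, d_left, o_right, d_right):
        """Midpoint of the shortest segment between two gaze rays,
        each given by an origin o and a unit direction d (3-vectors)."""
        w = o_left - o_right
        a, b, c = d_left @ d_left, d_left @ d_right, d_right @ d_right
        dw1, dw2 = d_left @ w, d_right @ w
        denom = a * c - b * b
        if abs(denom) < 1e-9:          # near-parallel rays: no usable vergence
            return None
        s = (b * dw2 - c * dw1) / denom
        t = (a * dw2 - b * dw1) / denom
        p_left = o_left + s * d_left
        p_right = o_right + t * d_right
        return 0.5 * (p_left + p_right)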


Computer Aided Surgery | 2004

Patient-specific bronchoscope simulation with pq-space-based 2D/3D registration

Adrian James Chung; Guang-Zhong Yang

Objective: The use of patient-specific models for surgical simulation requires photorealistic rendering of 3D structure and surface properties. For bronchoscope simulation, this requires augmenting virtual bronchoscope views generated from 3D tomographic data with patient-specific bronchoscope videos. To facilitate matching of video images to the geometry extracted from 3D tomographic data, this paper presents a new pq-space-based 2D/3D registration method for camera pose estimation in bronchoscope tracking. Methods: The proposed technique involves the extraction of surface normals for each pixel of the video images by using a linear local shape-from-shading algorithm derived from the unique camera/lighting constraints of the endoscopes. The resultant pq-vectors are then matched to those of the 3D model by differentiation of the z-buffer. A similarity measure based on angular deviations of the pq-vectors is used to provide a robust 2D/3D registration framework. Localization of tissue deformation is considered by assessing the temporal variation of the pq-vectors between subsequent frames. Results: The accuracy of the proposed method was assessed by using an electromagnetic tracker and a specially constructed airway phantom. Preliminary in vivo validation of the proposed method was performed on a matched patient bronchoscope video sequence and 3D CT data. Comparison to existing intensity-based techniques was also made. Conclusion: The proposed method does not involve explicit feature extraction and is relatively immune to illumination changes. The temporal variation of the pq distribution also permits the identification of localized deformation, which offers an effective way of excluding such areas from the registration process.
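
A hedged sketch of the similarity term described above is given below; the video-side and model-side pq maps are assumed to be available (the model-side map can be obtained by differentiating the rendered z-buffer, e.g. with the depth_to_pq helper sketched earlier), and weighting, validity masks and the surrounding pose optimisation are omitted:

    import numpy as np

    def angular_similarity(p_video, q_video, p_model, q_model):
        """Mean cosine of the angle between corresponding pq-vectors,
        each lifted to the 3D vector (p, q, 1) and normalised.
        Higher values indicate better video/model agreement."""
        v1 = np.dstack([p_video, q_video, np.ones_like(p_video)])
        v2 = np.dstack([p_model, q_model, np.ones_like(p_model)])
        v1 = v1 / np.linalg.norm(v1, axis=2, keepdims=True)
        v2 = v2 / np.linalg.norm(v2, axis=2, keepdims=True)
        return float(np.mean(np.sum(v1 * v2, axis=2)))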


Medical Image Computing and Computer Assisted Intervention | 2008

Dynamic View Expansion for Enhanced Navigation in Natural Orifice Transluminal Endoscopic Surgery

Mirna Lerotic; Adrian James Chung; James Clark; Salman Valibeik; Guang-Zhong Yang

Natural Orifice Transluminal Endoscopic Surgery (NOTES) is an emerging surgical technique with increasing global interest. It has recently transcended the boundaries of clinical experiments towards initial clinical evaluation. Although profound benefits to the patient have been demonstrated, NOTES requires highly skilled endoscopists for it to be performed safely and successfully. This predominantly reflects the skill required to navigate a flexible endoscope through a spatially complex environment. This paper presents a method to extend the visual field of the surgeon without compromising the safety of the patient. The proposed dynamic view expansion uses a novel parallax correction scheme to provide enhanced visual cues that aid navigation and orientation in the periphery during NOTES, while leaving the focal view undisturbed. The method was validated using a simulated natural orifice surgical environment and demonstrated on in vivo porcine data.
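
As a hedged, simplified stand-in for view expansion (a plain planar-homography mosaic built with OpenCV, not the parallax-corrected scheme proposed in the paper), a previous endoscopic frame can be warped into an enlarged canvas around the current frame; frames are assumed to be BGR images and all parameter values are illustrative:

    import cv2
    import numpy as np

    def expand_view(current, previous, canvas_scale=2):
        """Warp a previous frame into an enlarged canvas around the
        current frame using a feature-based planar homography."""
        orb = cv2.ORB_create(1000)
        g_prev = cv2.cvtColor(previous, cv2.COLOR_BGR2GRAY)
        g_curr = cv2.cvtColor(current, cv2.COLOR_BGR2GRAY)
        k1, d1 = orb.detectAndCompute(g_prev, None)
        k2, d2 = orb.detectAndCompute(g_curr, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        src = np.float32([k1[m.queryIdx].pt for m in matches])
        dst = np.float32([k2[m.trainIdx].pt for m in matches])
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

        h, w = current.shape[:2]
        offset = np.array([[1, 0, w * (canvas_scale - 1) / 2],
                           [0, 1, h * (canvas_scale - 1) / 2],
                           [0, 0, 1]], dtype=np.float64)
        canvas = cv2.warpPerspective(previous, offset @ H,
                                     (w * canvas_scale, h * canvas_scale))
        # Keep the live (focal) view undistorted in the centre of the canvas.
        y0 = int(h * (canvas_scale - 1) / 2)
        x0 = int(w * (canvas_scale - 1) / 2)
        canvas[y0:y0 + h, x0:x0 + w] = current
        return canvas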


IEEE Transactions on Medical Imaging | 2006

Patient-specific bronchoscopy visualization through BRDF estimation and disocclusion correction

Adrian James Chung; Pallav L. Shah; Athol U. Wells; Guang-Zhong Yang

This paper presents an image-based method for virtual bronchoscopy with photo-realistic rendering. The technique is based on recovering bidirectional reflectance distribution function (BRDF) parameters in an environment where viewing positions, directions, and illumination conditions are restricted. Video images of bronchoscopy examinations are combined with patient-specific three-dimensional (3-D) computed tomography data through two-dimensional (2-D)/3-D registration, and shading model parameters are then recovered by exploiting the restricted lighting configurations imposed by the bronchoscope. With the proposed technique, the recovered BRDF is used to predict the expected shading intensity, allowing a texture map independent of lighting conditions to be extracted from each video frame. To correct for disocclusion artefacts, statistical texture synthesis is used to recreate the missing areas. New views not present in the original bronchoscopy video are rendered by evaluating the BRDF with different viewing and illumination parameters. This allows free navigation of the acquired 3-D model with enhanced photo-realism. To assess the practical value of the proposed technique, a detailed visual scoring study involving both real and rendered bronchoscope images was conducted.
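
A hedged sketch of the shading idea under the bronchoscope's restricted lighting is shown below, assuming a point light collocated with the camera and a simple Lambertian falloff model rather than the full BRDF recovered in the paper; dividing the observed intensity by the predicted shading then yields an approximately lighting-independent texture value:

    import numpy as np

    def predict_shading(normals, points, cam_pos, k=1.0):
        """Predicted shading for a point light at the camera position:
        I = k * max(n . l, 0) / r^2, with l the unit direction to the light.
        normals, points: (N, 3) arrays; cam_pos: 3-vector."""
        to_light = cam_pos - points
        r2 = np.sum(to_light ** 2, axis=1)
        l = to_light / np.sqrt(r2)[:, None]
        cos_theta = np.clip(np.sum(normals * l, axis=1), 0.0, None)
        return k * cos_theta / r2

    def lighting_free_texture(observed, normals, points, cam_pos):
        """Divide out the predicted shading to obtain a texture value that is
        (approximately) independent of viewpoint and illumination."""
        shading = predict_shading(normals, points, cam_pos)
        return observed / np.maximum(shading, 1e-6)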


IEEE/OSA Journal of Display Technology | 2008

Intra-Operative Visualizations: Perceptual Fidelity and Human Factors

Danail Stoyanov; George P. Mylonas; Mirna Lerotic; Adrian James Chung; Guang-Zhong Yang

With increasing capability and complexity of surgical interventions, intra-operative visualization is becoming an important part of a surgical environment. This paper reviews some of our recent progress in the intelligent use of pre- and intra-operative data for enhanced surgical navigation and motion compensated visualization. High fidelity augmented reality (AR) with enhanced 3D depth perception is proposed to provide effective surgical guidance. To cater for large scale tissue deformation, real-time depth recovery based on stereo disparity and eye gaze tracking is introduced. This allows the development of motion compensated visualization for improved visual perception and for facilitating motion adaptive AR displays. The discussion of the paper is focused on how to ensure perceptual fidelity of AR and the need for real-time tissue deformation recovery and modeling, as well as the importance of incorporating human perceptual factors in surgical displays.


Medical Image Computing and Computer Assisted Intervention | 2003

pq-Space Based 2D/3D Registration for Endoscope Tracking

Adrian James Chung; Guang-Zhong Yang

This paper presents a new pq-space based 2D/3D registration method for camera pose estimation in endoscope tracking. The proposed technique involves the extraction of surface normals for each pixel of the video images by using a linear local shape-from-shading algorithm derived from the unique camera/lighting constraints of the endoscopes. We illustrate how the derived pq-space distribution is matched to that of the 3D tomographic model, and demonstrate the accuracy of the proposed method by using an electromagnetic tracker and a specially constructed airway phantom. Comparison to existing intensity-based techniques has also been made, which highlights the major strength of the proposed method: its robustness against illumination changes and tissue deformation.


Medical Image Computing and Computer Assisted Intervention | 2005

Predictive camera tracking for bronchoscope simulation with CONDensation

Adrian James Chung; Guang-Zhong Yang

This paper exploits the use of temporal information to minimize the ambiguity of camera motion tracking in bronchoscope simulation. The Condensation algorithm (Sequential Monte Carlo) has been used to propagate the probability distribution of the state space. For motion prediction, a second-order auto-regressive model has been used to characterize camera motion in a bounded lumen as encountered in bronchoscope examination. The method caters for multimodal probability distributions, and experimental results from both phantom and patient data demonstrate a significant improvement in tracking accuracy, especially in cases involving airway deformation and image artefacts.
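
A hedged sketch of one Condensation iteration with a second-order auto-regressive prediction step is given below; the state is kept as a generic (N, D) array for readability, the AR coefficients and diffusion noise are illustrative, and the likelihood callable (which in this setting would be an image-based registration similarity) is left abstract:

    import numpy as np

    rng = np.random.default_rng(0)

    def condensation_step(states_t1, states_t2, weights, likelihood,
                          a1=1.8, a2=-0.8, noise=0.05):
        """One Condensation (Sequential Monte Carlo) iteration.
        states_t1, states_t2: (N, D) particle states at times t-1 and t-2.
        weights: (N,) importance weights at time t-1.
        likelihood: callable scoring a predicted state against the frame."""
        n = len(states_t1)
        # 1. Resample particle histories in proportion to their weights.
        idx = rng.choice(n, size=n, p=weights / weights.sum())
        # 2. Predict with a second-order autoregressive model plus diffusion.
        predicted = (a1 * states_t1[idx] + a2 * states_t2[idx]
                     + noise * rng.standard_normal(states_t1.shape))
        # 3. Re-weight each prediction by the observation likelihood.
        new_weights = np.array([likelihood(x) for x in predicted])
        return predicted, states_t1[idx], new_weights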

Collaboration


Dive into Adrian James Chung's collaborations.

Top Co-Authors

Benny Lo | Imperial College London
Danail Stoyanov | University College London
Athol U. Wells | National Institutes of Health
Ara Darzi | Imperial College London