Publication


Featured research published by Duhgoon Lee.


Physics in Medicine and Biology | 2012

Automatic registration between 3D intra-operative ultrasound and pre-operative CT images of the liver based on robust edge matching

Woo Hyun Nam; Dong-Goo Kang; Duhgoon Lee; Jae Young Lee; Jong Beom Ra

The registration of a three-dimensional (3D) ultrasound (US) image with a computed tomography (CT) or magnetic resonance image is beneficial in various clinical applications such as diagnosis and image-guided intervention of the liver. However, conventional methods usually require a time-consuming and inconvenient manual process for pre-alignment, and the success of this process strongly depends on the proper selection of initial transformation parameters. In this paper, we present an automatic feature-based affine registration procedure of 3D intra-operative US and pre-operative CT images of the liver. In the registration procedure, we first segment vessel lumens and the liver surface from a 3D B-mode US image. We then automatically estimate an initial registration transformation by using the proposed edge matching algorithm. The algorithm finds the most likely correspondences between the vessel centerlines of both images in a non-iterative manner based on a modified Viterbi algorithm. Finally, the registration is iteratively refined on the basis of the global affine transformation by jointly using the vessel and liver surface information. The proposed registration algorithm is validated on synthesized datasets and 20 clinical datasets, through both qualitative and quantitative evaluations. Experimental results show that automatic registration can be successfully achieved between 3D B-mode US and CT images even with a large initial misalignment.
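The centerline correspondence step can be pictured with a small dynamic-programming sketch. The snippet below is illustrative only and assumes pre-extracted centerline point sets (`us_pts`, `ct_pts` as N x 3 NumPy arrays); the paper's modified Viterbi algorithm uses more elaborate edge-matching costs.

```python
import numpy as np

def viterbi_correspondence(us_pts, ct_pts, smooth_weight=1.0):
    """Assign each US centerline point to a CT centerline candidate with a
    Viterbi-style dynamic program: unary cost = point distance, pairwise cost
    penalizes assignments that jump between distant CT points.
    (Illustrative sketch; not the paper's exact cost formulation.)
    """
    n, m = len(us_pts), len(ct_pts)
    unary = np.linalg.norm(us_pts[:, None, :] - ct_pts[None, :, :], axis=2)     # (n, m)
    pairwise = np.linalg.norm(ct_pts[:, None, :] - ct_pts[None, :, :], axis=2)  # (m, m)

    cost = np.full((n, m), np.inf)
    back = np.zeros((n, m), dtype=int)
    cost[0] = unary[0]
    for i in range(1, n):
        # total[j_prev, j]: best cost ending at j_prev plus transition to j
        total = cost[i - 1][:, None] + smooth_weight * pairwise
        back[i] = np.argmin(total, axis=0)
        cost[i] = unary[i] + np.min(total, axis=0)

    # Backtrack the best assignment path
    path = [int(np.argmin(cost[-1]))]
    for i in range(n - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return path[::-1]   # path[i] = index of the CT point matched to US point i
```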


IEEE Signal Processing Letters | 2010

Robust CCD and IR Image Registration Using Gradient-Based Statistical Information

Jae hak Lee; Yong Sun Kim; Duhgoon Lee; Dong-Goo Kang; Jong Beom Ra

This letter presents a robust similarity measure for registering charge-coupled device (CCD) and infrared (IR) images. The measure is based on the entropy obtained from a 3-D joint histogram incorporating edginess and a modified generalized gradient vector flow (GGVF). To establish a reliable mapping between the edge regions of the two images, the concept of edginess is adopted so that registration performance is affected mainly by the presence of gradients rather than by their magnitudes. In addition, by adopting the GGVF, we alleviate the narrow capture range problem of conventional gradient-based measures. Experimental results show that the proposed measure performs more robustly than existing measures.
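The GGVF field referred to above is the standard iteration of Xu and Prince, which diffuses edge-map gradients into flat regions and thereby widens the capture range. Below is a minimal 2-D sketch; the parameter K, the step size, and the iteration count are illustrative choices, not values from the letter.

```python
import numpy as np
from scipy.ndimage import laplace, sobel

def ggvf(edge_map, K=0.05, iters=200, dt=0.2):
    """Generalized gradient vector flow: diffuses the edge-map gradient into
    homogeneous regions, widening the capture range of gradient-based measures.
    Parameters are illustrative."""
    fx = sobel(edge_map, axis=1, mode='nearest')
    fy = sobel(edge_map, axis=0, mode='nearest')
    mag = np.hypot(fx, fy)
    mag = mag / (mag.max() + 1e-12)   # normalize gradient magnitude to [0, 1]
    g = np.exp(-mag / K)              # weights diffusion in smooth regions
    h = 1.0 - g                       # weights data fidelity near strong edges

    u, v = fx.copy(), fy.copy()
    for _ in range(iters):
        u += dt * (g * laplace(u, mode='nearest') - h * (u - fx))
        v += dt * (g * laplace(v, mode='nearest') - h * (v - fy))
    return u, v
```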


Physics in Medicine and Biology | 2011

Non-rigid registration between 3D ultrasound and CT images of the liver based on intensity and gradient information

Duhgoon Lee; Woo Hyun Nam; Jae Young Lee; Jong Beom Ra

In order to utilize both ultrasound (US) and computed tomography (CT) images of the liver concurrently for medical applications such as diagnosis and image-guided intervention, non-rigid registration between these two types of images is an essential step, as local deformation between US and CT images exists due to the different respiratory phases involved and the probe pressure applied during US imaging. This paper introduces a voxel-based non-rigid registration algorithm between 3D B-mode US and CT images of the liver. In the proposed algorithm, to improve the registration accuracy, we utilize the surface information of the liver and gallbladder in addition to the information of the vessels inside the liver. For an effective correlation between US and CT images, we treat these anatomical regions separately according to their characteristics in US and CT images. Based on a novel objective function using a 3D joint histogram of intensity and gradient information, vessel-based non-rigid registration is followed by surface-based non-rigid registration, which improves the registration accuracy. The proposed algorithm is tested on ten clinical datasets and quantitative evaluations are conducted. Experimental results show that the registration error between anatomical features of US and CT images is less than 2 mm on average, even with local deformation due to different respiratory phases and probe pressure. In addition, the lesion registration error is less than 3 mm on average, with a maximum of 4.5 mm, which is considered acceptable for clinical applications.
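One way to picture an intensity-plus-gradient similarity is to accumulate a 3-D joint histogram over the two intensities and a gradient-direction agreement channel, then score it by joint entropy. The sketch below is a loose illustration under that assumption, with the CT volume presumed to be resampled into the US grid for the current transform estimate; it is not the paper's objective function.

```python
import numpy as np

def joint_hist_similarity(us, ct, n_bins=32, n_grad_bins=8):
    """Entropy of a 3-D joint histogram over (US intensity, CT intensity,
    gradient-direction agreement).  Illustrative sketch only; the paper's
    objective function differs in detail."""
    def norm_grad(img):
        g = np.stack(np.gradient(img.astype(float)))
        return g / (np.linalg.norm(g, axis=0, keepdims=True) + 1e-8)

    # Gradient-direction agreement in [0, 1]
    agree = np.abs((norm_grad(us) * norm_grad(ct)).sum(axis=0))

    sample = np.stack([us.ravel(), ct.ravel(), agree.ravel()], axis=1)
    hist, _ = np.histogramdd(sample, bins=(n_bins, n_bins, n_grad_bins))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()   # lower joint entropy ~ better alignment
```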


Medical Imaging 2006: Ultrasonic Imaging and Signal Processing | 2006

Automatic time gain compensation and dynamic range control in ultrasound imaging systems

Duhgoon Lee; Yong Sun Kim; Jong Beom Ra

For efficient and accurate diagnosis with ultrasound images, appropriate time gain compensation (TGC) and dynamic range (DR) control of ultrasound echo signals are important. TGC compensates for the attenuation of ultrasound echo signals along depth, and DR controls the image contrast. In recent ultrasound systems, these two factors are set automatically by the system and/or adjusted manually by an operator to obtain the desired image quality on the screen. In this paper, we propose an algorithm that finds optimized parameter values for TGC and DR automatically. In TGC optimization, we determine the degree of attenuation compensation along depth by reliably estimating the attenuation characteristic of the ultrasound signals. For DR optimization, we define a novel cost function that exploits the characteristics of ultrasound images. Experimental results are obtained by applying the proposed algorithm to a real ultrasound (US) imaging system. The results show that the proposed algorithm automatically sets the TGC and DR values in real time so that the subjective quality of the enhanced ultrasound images is sufficient for efficient and accurate diagnosis.
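A minimal sketch of the idea, assuming log-compressed envelope data with depth along the rows: estimate the average brightness decay with depth, undo it (TGC), and map the result into a fixed display dynamic range (DR). The linear fit and parameter names are illustrative assumptions, not the paper's attenuation estimator or cost function.

```python
import numpy as np

def auto_tgc(log_envelope):
    """Estimate and compensate depth-dependent attenuation on log-compressed
    envelope data (rows = depth).  Minimal illustration with a linear fit."""
    depth = np.arange(log_envelope.shape[0])
    mean_per_depth = log_envelope.mean(axis=1)          # average brightness vs depth
    slope, _ = np.polyfit(depth, mean_per_depth, 1)     # decay trend along depth
    gain = -(slope * depth)                             # gain curve that undoes the trend
    return log_envelope + gain[:, None]

def clip_dynamic_range(img_db, dr=60.0):
    """Map log data into a display dynamic range of `dr` dB (illustrative)."""
    top = img_db.max()
    return np.clip((img_db - (top - dr)) / dr, 0.0, 1.0)
```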


Medical Physics | 2014

Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images

Chijun Weon; Woo Hyun Nam; Duhgoon Lee; Jae Young Lee; Jong Beom Ra

PURPOSE: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has recently been studied for US-guided intervention. However, existing techniques are limited in either registration speed or performance. The purpose of this work is to develop a real-time, fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions.

METHODS: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they attach a 3D US transducer to the patient's body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via rigid registration of these US images to the 3D preoperative images in the 4D image, the pose of the fixed 3D US transducer is determined with respect to the preoperative image coordinates. As the feature(s) for the rigid registration, either the internal liver vessels or the inferior vena cava may be chosen; the authors newly propose using the latter, which is especially useful in patients with diffuse liver disease. In the intraoperative real-time stage, they acquire 2D US images in real time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose of the transducer. The correct corresponding image is then found among those candidates via real-time 2D registration based on a gradient-based similarity measure. Finally, if needed, they obtain the position of the liver lesion using the 3D preoperative image to which the registered 2D preoperative slice belongs.

RESULTS: The proposed method was applied to 23 clinical datasets and quantitative evaluations were conducted. With the exception of one dataset containing US images of extremely low quality, 22 datasets covering a range of liver conditions were used in the evaluation. Experimental results showed that the registration error between anatomical features of the US and preoperative MR images is less than 3 mm on average. The lesion tracking error was found to be less than 5 mm at maximum.

CONCLUSIONS: A new system has been proposed for real-time registration between 2D US and successive multiple 3D preoperative MR/CT images of the liver, and applied to indirect lesion tracking for image-guided intervention. The system is fully automatic and robust even with images of low quality due to patient status. Through visual examinations and quantitative evaluations, it was verified that the proposed system provides high lesion tracking accuracy as well as high registration accuracy, at performance levels acceptable for various clinical applications.
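The intraoperative real-time stage can be sketched as a scoring loop: for each live 2D US frame, compare pre-extracted candidate preoperative slices with a gradient-based similarity and keep the best match. The particular measure below (normalized gradient-magnitude correlation) and the function names are assumptions for illustration only, not the paper's exact measure.

```python
import numpy as np

def gradient_similarity(a, b):
    """Normalized correlation of gradient magnitudes (illustrative measure)."""
    def gmag(img):
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy)
    ga, gb = gmag(a), gmag(b)
    ga -= ga.mean()
    gb -= gb.mean()
    denom = np.linalg.norm(ga) * np.linalg.norm(gb) + 1e-8
    return float((ga * gb).sum() / denom)

def best_matching_slice(us_frame, candidate_slices):
    """Pick the candidate preoperative slice (one per respiratory phase) that
    best matches the live 2D US frame; its parent 3D volume then supplies the
    lesion position."""
    scores = [gradient_similarity(us_frame, s) for s in candidate_slices]
    return int(np.argmax(scores)), max(scores)
```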


International Symposium on Biomedical Imaging | 2010

Sensorless and real-time registration between 2D ultrasound and preoperative images of the liver

Duhgoon Lee; Woo Hyun Nam; Donggyu Hyun; Jae Young Lee; Jong Beom Ra

Synchronization between real-time ultrasound (US) and preoperative images can provide valuable information for US-guided intervention. For this synchronization, we present a real-time registration system between the two images of the liver that requires no position sensors. In this system, we first generate a 4D preoperative image, composed of multiple 3D images along the respiratory cycle, by considering their local deformation. In the intraoperative stage, we obtain the pose of a fixed-pose 3D US transducer by using several 3D US images. We then acquire 2D US images and find their corresponding images in real time from the 4D preoperative image. The registration is performed by comparing a gradient-based similarity measure between a 2D US image and generated 2D preoperative image candidates. Through visual assessment of the registration results, we confirm the feasibility of the proposed system for image guidance.


International Symposium on Biomedical Imaging | 2010

Robust registration of 3-D ultrasound and CT images of the liver for image-guided intervention

Woo Hyun Nam; Dong-goo Kang; Duhgoon Lee; Jong Beom Ra

The registration of multi-modal images of the same organ is beneficial in various clinical applications. However, conventional methods usually require a time-consuming and inconvenient manual process for pre-alignment. In this paper, we present an automatic and robust registration algorithm for 3-D B-mode US and CT images of the liver. The proposed algorithm first automatically segments the vessels and liver surface from a 3-D B-mode US image by efficiently eliminating unwanted clutter and noise. It then predicts a reliable initial transform parameter set in a non-iterative manner by maximizing the geometric correlation between the skeletons of the vessels segmented from both images. Finally, the algorithm iteratively refines the obtained parameters to find the optimal affine transform parameters by jointly using vessel and liver surface information. Experimental results for 20 clinical datasets show that the proposed algorithm successfully registers a 3-D B-mode US image to its corresponding 3-D CT image even with a large misalignment.


Journal of Instrumentation | 2014

Fast signal transfer in a large-area X-ray CMOS image sensor

Myung-Ki Kim; D. Kang; Duhgoon Lee; Hyun-Sik Kim; Gyuseong Cho; M Jae

For 2-D X-ray imaging, such as mammography and non-destructive testing, a sensor should have a large area because typical X-ray beams cannot be focused by an optical lens system. To make a large-area 2-D X-ray image sensor using crystalline Si, unit CMOS image sensors can be tiled into a 2 × 2 or 2 × 3 array. In a unit CMOS image sensor fabricated on the most common 8-inch Si wafers, the signal line can be up to ~180 mm long. Its parasitic capacitance is then up to ~25 pF and its resistance up to ~51 kΩ (0.18 μm, 1P3M process). This long signal line may lengthen the row time to ~50 μs for signals traveling from the top-row pixels to the readout amplifiers located at the bottom of the sensor chip. The output signal pulse is typically characterized by three components in sequence: a charging time (the rising part), a reading time, and a discharging time (the falling part). Among these, the discharging time is the longest, and it limits the speed, or frame rate, of the X-ray imager. We propose a forced discharging method that uses a bypass transistor in parallel with the current source of the column signal line. A chip for testing the idea was fabricated in a 0.18 μm process. An active pixel sensor with three transistors and a 3-π RC model of the long line were simulated together. The test results showed that turning the proposed bypass transistor on and off only during the discharging interval can dramatically reduce the discharging time from ~50 μs to ~2 μs, which is the physical minimum determined by the capacitance of the long metal line.
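A quick order-of-magnitude check, using the line parasitics quoted in the abstract, shows why a bypass switch helps: discharging through the column current source is slew limited, whereas a closed low-impedance switch leaves roughly the RC delay of the line. The bias current and voltage swing below are assumed values chosen only to illustrate the calculation, not figures from the paper.

```python
# Order-of-magnitude check of the discharging times quoted above.
# R_line and C_line come from the abstract; I_bias and dV are assumptions.
R_line = 51e3      # ohm, long column signal line resistance (from abstract)
C_line = 25e-12    # F, line parasitic capacitance (from abstract)
I_bias = 1e-6      # A, assumed column current-source bias
dV     = 2.0       # V, assumed signal swing to discharge

t_slew = C_line * dV / I_bias   # current-source-limited discharge
t_rc   = R_line * C_line        # RC-limited discharge with a closed bypass switch

print(f"slew-limited discharge ~ {t_slew*1e6:.0f} us")  # ~50 us, matches the abstract
print(f"RC-limited discharge   ~ {t_rc*1e6:.1f} us")    # ~1.3 us, order of the ~2 us floor
```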


Journal of Instrumentation | 2017

An innovative method to reduce count loss from pulse pile-up in a photon-counting pixel for high flux X-ray applications

Duhgoon Lee; Kyoohyun Lim; Kyungjin Park; Changyeop Lee; S. Alexander; Gyuseong Cho

In this study, an innovative fast X-ray photon-counting pixel for high X-ray flux applications is proposed. A computed tomography system typically uses X-ray fluxes of up to 10⁸ photons/mm²/s at the detector, so a fast read-out is required to process individual X-ray photons. Otherwise, pulse pile-up can occur at the output of the signal processing unit. These superimposed signals distort the measured number of incident X-ray photons, leading to count loss. To minimize such losses, a cross detection method was implemented in the photon-counting pixel. The maximum count rate under an X-ray tube voltage of 90 kV was obtained, reflecting the electrical test results of the proposed photon-counting pixel. A maximum count rate of 780 kcps was achieved with a conventional photon-counting pixel at a pulse processing time of 500 ns, which is the time for a pulse to return to the baseline from the initial rise. In contrast, a maximum count rate of about 8.1 Mcps was achieved with the proposed photon-counting pixel. From these results, it is clear that the maximum count rate was increased by approximately a factor of 10 by adopting the cross detection method. The cross detection method therefore reduces count loss from pulse pile-up in a photon-counting pixel while maintaining the pulse processing time.
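As a consistency check (not the authors' analysis), a standard paralyzable dead-time model with the quoted 500 ns pulse processing time already predicts a ceiling close to the conventional pixel's measured maximum count rate:

```python
import numpy as np

# Paralyzable dead-time model: observed rate m = n * exp(-n * tau),
# which peaks at a true rate n = 1/tau, giving m_max = 1 / (e * tau).
tau = 500e-9                  # s, pulse processing time (from abstract)
m_max = 1.0 / (np.e * tau)    # maximum observable count rate

print(f"model maximum ~ {m_max/1e3:.0f} kcps")  # ~736 kcps, close to the ~780 kcps quoted
```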


Journal of Instrumentation | 2017

Development of P-on-N silicon photomultiplier prototype for blue light detection

Kyung Taek Lim; Duhgoon Lee; Kyeo-reh Park; Giyoon Kim; Min Goo Lee; Yun Ho Kim; Myung-Suk Kim; Ju-Yeop Kim; H. Kim; Eunjoong Lee; Woo-Suk Sul; Gyuseong Cho

In this paper, we report a preliminary study on the electrical and optical properties of the first P-on-N SiPM prototype developed at KAIST in collaboration with NNFC. The sensors were fabricated on a 200 mm n-type silicon epitaxial-layer wafer via a customized CMOS process at NNFC. Measurements of the reverse current were carried out at the wafer level with an auto-probing station, and the breakdown voltage was found to be 32.3 V. For the optical characterization, the gain, dark count rate, and photon detection efficiency were measured as a function of bias voltage at room temperature. In particular, we show that the device has a gain of ~10⁶, comparable to that of conventional PMTs, and a peak sensitivity in the blue light regime. Furthermore, we attempt to explain possible causes of some of the phenomena observed during device characterization.
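For orientation, the quoted gain of ~10⁶ is consistent with the textbook single-microcell relation G = C_cell·(V_bias − V_bd)/q; the microcell capacitance and overvoltage below are assumed values chosen only to illustrate the calculation, not device parameters from the paper.

```python
# Standard SiPM gain relation: G = C_ucell * (V_bias - V_bd) / q.
# V_bd is from the abstract; C_ucell and the overvoltage are assumptions.
q       = 1.602e-19   # C, elementary charge
V_bd    = 32.3        # V, measured breakdown voltage (from abstract)
dV_over = 3.0         # V, assumed overvoltage above breakdown
C_ucell = 55e-15      # F, assumed microcell capacitance

gain = C_ucell * dV_over / q
print(f"gain ~ {gain:.2e}")   # ~1e6, consistent with the reported value
```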
