Publication


Featured research published by Mi-Sun Kang.


Proceedings of SPIE | 2012

Cell morphology classification in phase contrast microscopy image reducing halo artifact

Mi-Sun Kang; Soo-Min Song; Hana Lee; Myoung-Hee Kim

Since the morphology of tumor cells is a good indicator of their invasiveness, we used time-lapse phase-contrast microscopy to examine it. This technique enables long-term observation of live-cell activity without the photobleaching and phototoxicity common in fluorescence-labeled microscopy. However, it has certain imaging drawbacks. We therefore first corrected for non-uniform illumination artifacts and then used intensity distribution information to detect cell boundaries. In a phase-contrast microscopy image, a cell normally appears as a dark region surrounded by a bright halo ring. Because the halo artifact is minimal around the cell body and diffuses asymmetrically, we compute a cross-sectional plane that intersects the center of each cell and is orthogonal to its first principal axis. We then extract the dark cell region by analyzing the intensity profile curve, treating local bright peaks as the halo area. Finally, we examined cell morphology to classify tumor cells as malignant or benign.
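As a rough illustration of the cross-sectional intensity-profile idea, the sketch below computes the first principal axis of a rough cell mask and samples the intensity along the orthogonal line through the cell center, where halo peaks flank the dark cell body. The inputs `image` and `rough_mask` and all parameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed inputs, not the paper's code): PCA on a rough cell mask
# gives the first principal axis; intensities are sampled along the orthogonal
# cross-section through the cell center.
import numpy as np
from scipy.ndimage import map_coordinates

def cross_section_profile(image, rough_mask, half_length=40, n_samples=81):
    ys, xs = np.nonzero(rough_mask)
    pts = np.column_stack([ys, xs]).astype(float)
    center = pts.mean(axis=0)
    cov = np.cov((pts - center).T)                     # 2x2 coordinate covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    first_axis = eigvecs[:, np.argmax(eigvals)]        # main elongation direction
    ortho = np.array([-first_axis[1], first_axis[0]])  # orthogonal cross-section
    t = np.linspace(-half_length, half_length, n_samples)
    coords = center[None, :] + t[:, None] * ortho[None, :]
    # Interpolate the intensity profile; local bright peaks mark the halo,
    # the dark valley between them the cell body.
    profile = map_coordinates(image.astype(float), coords.T, order=1)
    return t, profile
```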


International Symposium on Visual Computing | 2014

Non-rigid Registration of Vascular Structures for Aligning 2D X-ray Angiography with 3D CT Angiography

Hye-Ryun Kim; Mi-Sun Kang; Myoung-Hee Kim

The alignment of pre-operative 3D scans with intra-operative 2D images is important for providing better image guidance. Specifically, overlaying the 3D centerlines of coronary arteries on X-ray angiography images reduces the uncertainty inherent in the 2D images used during cardiovascular interventions. Because of the dynamic cardiovascular motion caused by heartbeat and respiration, a non-rigid registration approach should be applied, in contrast to registration of a static vascular structure. In this paper, a modified TPS-RPM method is adopted as a feature-based non-rigid registration. The proposed method is evaluated on 12 clinical datasets to highlight the necessity of a non-rigid registration approach.
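For readers unfamiliar with TPS-RPM, the sketch below shows only the thin-plate-spline warping half of the method, fitted from already matched 2D point pairs; the robust point matching part (soft correspondences and annealing) is omitted. Variable names and the smoothing value are assumptions, not the authors' implementation.

```python
# Hedged sketch of thin-plate-spline (TPS) warping, the deformation model behind
# TPS-RPM (the correspondence/annealing loop of RPM is not shown here).
import numpy as np
from scipy.interpolate import RBFInterpolator

def fit_tps_warp(src_pts, dst_pts, smoothing=1e-3):
    """src_pts: projected 3D-CTA centerline points (n, 2); dst_pts: matched X-ray points."""
    # RBFInterpolator with a thin-plate-spline kernel is the classic TPS fit;
    # the smoothing term acts as the regularization weight.
    tps = RBFInterpolator(src_pts, dst_pts, kernel='thin_plate_spline',
                          smoothing=smoothing)
    return tps  # calling tps(points) warps arbitrary 2D points

# Example with hypothetical arrays:
# warped_centerline = fit_tps_warp(src, dst)(centerline)
```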


SLAS DISCOVERY: Advancing Life Sciences R&D | 2017

High-Throughput Clonogenic Analysis of 3D-Cultured Patient-Derived Cells with a Micropillar and Microwell Chip

Dong Woo Lee; Sang-Yun Lee; Lily Park; Mi-Sun Kang; Myoung-Hee Kim; Il Doh; Gyu Ha Ryu; Do-Hyun Nam

A high-throughput clonogenic assay on a micropillar–microwell chip platform is proposed, based on the colony area of glioblastoma multiforme (GBM) patient-derived cells (PDCs) measured from colony images. Unlike conventional cell lines, PDCs from a tumor are composed of heterogeneous cell populations: some clonogenic populations form colonies during culture while the rest die off or remain unchanged, causing a diverse distribution of colony sizes. Area-based analysis of the total colonies is therefore not sufficient to estimate total cell viability or toxicity responses. In this work, the average and standard deviation of individual colony areas calculated from the colony images were used as indicators of cell clonogenicity and heterogeneity, respectively. The two parameters (total and average colony area) were compared by drawing colony growth curves and measuring the doubling time and dose–response curve (IC50). Based on both analyses of two PDCs, 464T PDCs show higher heterogeneity and clonogenicity than 448T PDCs. The differences in doubling time and IC50 between the analysis methods suggest that the average colony area, rather than the total area, is suitable for heterogeneous and clonogenic samples.
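A minimal sketch of the two colony-area readouts compared above, computed from a binary colony mask with scikit-image; the mask itself and any preprocessing are assumed.

```python
# Total colony area (conventional readout) versus per-colony mean and standard
# deviation (the clonogenicity and heterogeneity indicators described above).
import numpy as np
from skimage.measure import label, regionprops

def colony_area_stats(colony_mask):
    """colony_mask: binary image whose foreground pixels belong to colonies."""
    areas = np.array([r.area for r in regionprops(label(colony_mask))])
    if len(areas) == 0:
        return {'n_colonies': 0, 'total_area': 0, 'mean_area': np.nan, 'std_area': np.nan}
    return {
        'n_colonies': len(areas),
        'total_area': areas.sum(),    # population-level readout
        'mean_area': areas.mean(),    # clonogenicity indicator
        'std_area': areas.std(),      # heterogeneity indicator
    }
```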


Journal of KIISE | 2016

Feature-based Gene Classification and Region Clustering using Gene Expression Grid Data in Mouse Hippocampal Region

Mi-Sun Kang; Hye-Ryun Kim; Sukchan Lee; Myoung-Hee Kim

Brain gene expression information is closely related to the structural and functional characteristics of the brain. Extensive research has therefore been carried out on the relationship between gene expression patterns and the brain's structural organization. In this study, principal component analysis was used to extract features of gene expression patterns, and genes were automatically classified by their spatial distribution. Voxels were then clustered using the genes classified as region-specific. Finally, we visualized the clustering results for mouse hippocampal gene expression together with the Allen Brain Atlas. This experiment allowed us to classify region-specific gene expression in the mouse hippocampal region and to visualize the clustering results and the brain atlas in an integrated manner. This study may allow neuroscientists to search for experimental groups of genes more quickly and to design effective experiments for this form of data. It is also expected to enable the discovery of more specific sub-regions beyond the currently known anatomical regions of the brain.
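The pipeline described above could be sketched roughly as follows with scikit-learn: PCA to extract per-gene features from the voxel-wise expression grid, a clustering step to group genes by spatial distribution, and a second clustering of voxels by the expression of those genes. The matrix `expr`, the component counts, and the use of k-means are illustrative assumptions; the paper's actual classification and clustering choices may differ.

```python
# Rough sketch (assumptions noted above): PCA features per gene, then k-means
# grouping of genes by spatial pattern and of voxels by expression profile.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def classify_and_cluster(expr, n_components=10, n_groups=5, n_regions=5):
    """expr: (n_genes, n_voxels) matrix built from the gene expression grid data."""
    gene_features = PCA(n_components=n_components).fit_transform(expr)
    gene_groups = KMeans(n_clusters=n_groups, n_init=10).fit_predict(gene_features)
    # Cluster voxels using only the genes assigned to one (region-specific) group.
    region_genes = expr[gene_groups == 0]
    voxel_clusters = KMeans(n_clusters=n_regions, n_init=10).fit_predict(region_genes.T)
    return gene_groups, voxel_clusters
```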


Proceedings of SPIE | 2013

Intensity-based segmentation and visualization of cells in 3D microscopic images using the GPU

Mi-Sun Kang; Jeong-Eom Lee; Woong-ki Jeon; Heung-Kook Choi; Myoung-Hee Kim

3D microscopy images contain an enormous amount of data, which makes 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). A common workaround is to crop a small region of interest (ROI) from the input image. Although this reduces cost and time, it has drawbacks at the image-processing level: the selected ROI depends strongly on the user, and information from the original image is lost. To mitigate these problems, we developed a 3D microscopy image processing tool that runs on a graphics processing unit (GPU). Our tool provides several efficient automatic thresholding methods for intensity-based segmentation of 3D microscopy images, and users can select which algorithm to apply. The tool also visualizes the segmented volume data, with scaling, translation, and other view settings controlled by keyboard and mouse. However, the rapidly visualized 3D objects still need to be analyzed to provide quantitative information for biologists. We therefore label the segmented 3D objects in every 3D microscopy image and obtain quantitative information on each labeled object, which can be used as classification features. A user can select an object to analyze; the selected object is displayed in a new window so that its details can be observed. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times under matched specifications and configurations.
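The segmentation, labeling, and quantification pipeline can be sketched on the CPU with scikit-image as below; the tool described in the paper runs the equivalent steps on the GPU for speed. The input `volume` and the choice of Otsu thresholding (one of several selectable methods) are assumptions for illustration.

```python
# CPU-side sketch of the pipeline described above (the tool runs it on the GPU):
# automatic intensity thresholding, 3D connected-component labeling, and
# per-object quantitative features usable for classification.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def segment_and_quantify(volume):
    mask = volume > threshold_otsu(volume)        # intensity-based segmentation
    labelled = label(mask)                        # label each 3D object
    stats = [(r.label, r.area, r.centroid) for r in regionprops(labelled)]
    return labelled, stats                        # volume + per-object features
```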


Journal of Biomedical Engineering Research | 2012

Classification of Tumor cells in Phase-contrast Microscopy Image using Fourier Descriptor

Mi-Sun Kang; Jeong-Eom Lee; Hye-Ryun Kim; Myoung-Hee Kim

Tumor cell morphology is closely related to migratory behavior: an active tumor cell has a highly irregular shape, whereas a spherical cell is inactive. Quantitative analysis of cell features is therefore crucial for determining tumor malignancy or testing the efficacy of anticancer treatment. We use 3D time-lapse phase-contrast microscopy to analyze single-cell morphology because it enables long-term observation of living cells without the photobleaching and phototoxicity common in fluorescence-labeled microscopy. Despite this advantage, phase-contrast microscopy has image-level drawbacks, such as local light effects and the contrast interference ring. We therefore first corrected for non-uniform illumination artifacts and then used intensity distribution information to detect cell boundaries. In a phase-contrast microscopy image, a cell normally appears as a dark region surrounded by a bright halo ring. Because the halo artifact is minimal around the cell body and diffuses asymmetrically, we compute a cross-sectional plane that intersects the center of each cell and is orthogonal to its first principal axis. We then extract the dark cell region by analyzing the intensity profile curve, treating local bright peaks as the halo area. Finally, we calculated Fourier descriptors of the morphological characteristics of each cell to classify tumor cells into active and inactive groups, and validated the classification accuracy by comparison with manually obtained results.
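The Fourier-descriptor step could look roughly like the sketch below: the cell boundary is treated as a complex signal, and the scale-normalized magnitudes of its low-order FFT coefficients serve as translation-, rotation-, and scale-invariant shape features. The contour extraction and the number of coefficients are assumptions, not the paper's exact formulation.

```python
# Sketch of a Fourier shape descriptor from a binary cell mask (assumed inputs).
import numpy as np
from skimage.measure import find_contours

def fourier_descriptor(cell_mask, n_coeffs=16):
    contour = max(find_contours(cell_mask.astype(float), 0.5), key=len)
    z = contour[:, 1] + 1j * contour[:, 0]   # boundary points as x + iy
    coeffs = np.fft.fft(z - z.mean())        # removing the mean drops translation
    mags = np.abs(coeffs)                    # magnitudes are rotation-invariant
    mags = mags / mags[1]                    # divide by 1st harmonic: scale-invariant
    return mags[1:1 + n_coeffs]              # low-order magnitudes as the feature vector
```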


Medical Image Computing and Computer-Assisted Intervention | 2018

DeepHCS: Bright-Field to Fluorescence Microscopy Image Conversion Using Deep Learning for Label-Free High-Content Screening

Gyuhyun Lee; Jeong-Woo Oh; Mi-Sun Kang; Nam-Gu Her; Myoung-Hee Kim; Won-Ki Jeong

In this paper, we propose a novel image processing method, DeepHCS, to transform bright-field microscopy images into synthetic fluorescence images of cell nuclei biomarkers commonly used in high-content drug screening. The main motivation of the proposed work is to automatically generate virtual biomarker images from conventional bright-field images, which can greatly reduce time-consuming and laborious tissue preparation efforts and improve the throughput of the screening process. DeepHCS uses bright-field images and their corresponding cell nuclei staining (DAPI) fluorescence images as a set of image pairs to train a series of end-to-end deep convolutional neural networks. By leveraging a state-of-the-art deep learning method, the proposed method can produce synthetic fluorescence images comparable to real DAPI images with high accuracy. We demonstrate the efficacy of this method using a real glioblastoma drug screening dataset with various quality metrics, including PSNR, SSIM, cell viability correlation (CVC), the area under the curve (AUC), and the IC50.
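As a purely schematic illustration of the bright-field-to-fluorescence mapping, the toy PyTorch encoder-decoder below maps a single-channel bright-field image to a synthetic DAPI-like image under a reconstruction loss; the actual DeepHCS architecture is a deeper, multi-stage network, so treat this only as a sketch of the idea.

```python
# Toy image-to-image network (NOT the DeepHCS architecture): bright-field in,
# synthetic fluorescence out, trained on paired images with an L1 loss.
import torch
import torch.nn as nn

class ToyBF2FL(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, bright_field):            # (N, 1, H, W) -> (N, 1, H, W)
        return self.decoder(self.encoder(bright_field))

# Training step on a paired (bright-field, DAPI) batch:
# loss = nn.L1Loss()(ToyBF2FL()(bf_batch), dapi_batch)
```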


Proceedings of SPIE | 2017

Quantification of patient-derived 3D cancer spheroids in high-content screening images

Mi-Sun Kang; Seon-Min Rhee; Ji-Hyun Seo; Myoung-Hee Kim

We present a cell image quantification method for image-based drug response prediction from patient-derived glioblastoma cells. Drug response differs between individuals at the cellular level, so quantification of a patient-derived cell phenotype is important for drug response prediction. We performed fluorescence microscopy to characterize the features of patient-derived 3D cancer spheroids. A 3D cell culture simulates the in-vivo environment more closely than 2D adherent culture and thus allows more accurate cell analysis; it also allows assessment of cellular aggregates, and cohesion is an important feature of cancer cells. In this paper, we demonstrate image-based quantification of cellular area, fluorescence intensity, and cohesion. To this end, we first performed image stitching to create a single image of each well of the plate under the same conditions. This image shows colonies of various sizes and shapes, which we detect automatically with an intensity-based classification algorithm. The morphological features of each cancer cell colony were measured. Next, we calculated the spatial correlation of the colony locations, which reflects the cell density within the same well environment. Finally, we compared the features of drug-treated and untreated cells. This technique could potentially be applied to drug screening and quantification of drug effects.
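One simple way to express the cohesion/density measurement described above is the mean nearest-neighbor distance between colony centroids in the stitched well image, sketched below; the exact correlation measure used in the paper is not specified here, so this is an assumed stand-in.

```python
# Assumed stand-in for the colony cohesion/density measure: mean distance from
# each colony centroid to its nearest neighbor within the stitched well image.
import numpy as np
from skimage.measure import label, regionprops
from scipy.spatial import cKDTree

def colony_cohesion(colony_mask):
    centroids = np.array([r.centroid for r in regionprops(label(colony_mask))])
    if len(centroids) < 2:
        return np.nan
    dists, _ = cKDTree(centroids).query(centroids, k=2)  # k=2: self + nearest neighbor
    return dists[:, 1].mean()   # smaller values indicate more cohesive growth
```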


Proceedings of SPIE | 2017

Phenotypic feature quantification of patient derived 3D cancer spheroids in fluorescence microscopy image

Mi-Sun Kang; Seon-Min Rhee; Ji-Hyun Seo; Myoung-Hee Kim

Patients' responses to a drug differ at the cellular level. Here, we present an image-based cell phenotypic feature quantification method for predicting the response of patient-derived glioblastoma cells to a particular drug. We used high-content imaging to characterize the features of patient-derived cancer cells. A 3D spheroid culture resembles the in vivo environment more closely than a 2D adherent culture and allows observation of cellular aggregate characteristics, but it makes analysis at the individual-cell level more challenging. In this paper, we demonstrate image-based phenotypic screening of the nuclei of patient-derived cancer cells. We first stitched the images of each well of the 384-well plate acquired under the same conditions. We then used intensity information to detect the colonies, and nuclear intensity and morphological characteristics to segment individual nuclei. Next, we calculated the position of each nucleus, which reveals the spatial pattern of cells within the well environment. Finally, we compared the results obtained from 3D spheroid-cultured cells with those from 2D adherent-cultured cells from the same patient treated with the same drugs. This technique could be applied to image-based phenotypic screening of cells to determine a patient's response to a drug.
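The nuclear segmentation step could be approximated with the generic recipe below (Otsu threshold, distance transform, marker-based watershed to split touching nuclei); the thresholding method, marker spacing, and other details are assumptions rather than the paper's exact procedure.

```python
# Generic nucleus segmentation sketch: threshold the DAPI channel, seed one marker
# per distance-map maximum, and split touching nuclei with a watershed.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_nuclei(dapi_image, min_distance=10):
    mask = dapi_image > threshold_otsu(dapi_image)
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=min_distance, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)   # label image of nuclei
```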


Proceedings of SPIE | 2016

Analysis of cancer cell morphology in fluorescence microscopy image exploiting shape descriptor

Mi-Sun Kang; Hye-Ryun Kim; Sudong Kim; Gyu Ha Ryu; Myoung-Hee Kim

Cancer cell morphology is closely related to their phenotype and activity. These characteristics are important in drug-response prediction for personalized cancer therapeutics. We used multi-channel fluorescence microscopy images to analyze the morphology of highly cohesive cancer cells. First, we detected individual nuclei regions in single-channel images using advanced simple linear iterative clustering. The center points of the nuclei regions were used as seeds for the Voronoi diagram method to extract spatial arrangement features from cell images. Human cancer cell populations form irregularly shaped aggregates, making their detection more difficult. We overcame this problem by identifying individual cells using an image-based shape descriptor. Finally, we analyzed the correlation between cell agglutination and cell shape.
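A compact sketch of how the two named ingredients might be combined is shown below: SLIC superpixels delineate candidate nucleus regions in the nuclei channel, bright superpixel centroids act as seeds, and a Voronoi diagram over those seeds gives the spatial-arrangement features. Parameter values and the intensity filter are illustrative assumptions.

```python
# Illustrative combination (assumed parameters): SLIC superpixels on the nuclei
# channel, bright-superpixel centroids as seeds, Voronoi diagram for spatial layout.
import numpy as np
from skimage.segmentation import slic
from skimage.filters import threshold_otsu
from skimage.measure import regionprops
from scipy.spatial import Voronoi

def nuclei_voronoi(nuclei_channel, n_segments=400):
    segments = slic(nuclei_channel, n_segments=n_segments, compactness=0.1,
                    channel_axis=None)               # grayscale SLIC superpixels
    thr = threshold_otsu(nuclei_channel)
    centers = np.array([r.centroid
                        for r in regionprops(segments, intensity_image=nuclei_channel)
                        if r.mean_intensity > thr])  # keep likely-nucleus superpixels
    return Voronoi(centers)                          # needs several non-collinear seeds
```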

Collaboration


Dive into Mi-Sun Kang's collaborations.

Top Co-Authors

Gyu Ha Ryu (Sungkyunkwan University)
Hana Lee (Ewha Womans University)
Ji-Hyun Seo (Ewha Womans University)