
Publication


Featured research published by Yoon-Chul Kim.


Magnetic Resonance in Medicine | 2009

Accelerated Three-Dimensional Upper Airway MRI Using Compressed Sensing

Yoon-Chul Kim; Shrikanth Narayanan; Krishna S. Nayak

In speech‐production research, three‐dimensional (3D) MRI of the upper airway has provided insights into vocal tract shaping and data for its modeling. Small movements of articulators can lead to large changes in the produced sound; therefore, improving the resolution of these data sets, within the constraints of a sustained speech sound (6–12 s), is an important area for investigation. The purpose of the study is to provide a first application of compressed sensing (CS) to high‐resolution 3D upper airway MRI using spatial finite difference as the sparsifying transform, and to experimentally determine the benefit of applying constraints on image phase. Estimates of image phase are incorporated into the CS reconstruction to improve the sparsity of the finite difference of the solution. In a retrospective subsampling experiment with no sound production, 5× and 4× were the highest acceleration factors that produced acceptable image quality when using a phase constraint and when not using a phase constraint, respectively. The prospective use of a 5× undersampled acquisition and phase‐constrained CS reconstruction enabled 3D vocal tract MRI during sustained sound production of English consonants /s/, /∫/, /l/, and /r/ with 1.5 × 1.5 × 2.0 mm3 spatial resolution and 7 s of scan time. Magn Reson Med, 2009.
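The benefit of the phase constraint can be seen in a toy experiment: demodulating a low-resolution phase estimate taken from the center of k-space leaves an approximately real-valued image whose finite difference is sparser, which is the property the CS reconstruction exploits. A minimal sketch with synthetic 1-D data (illustrative, not the paper's code):

```python
import numpy as np

# Toy 1-D "image": piecewise-constant magnitude with a slowly varying phase.
n = 256
mag = np.zeros(n)
mag[60:120] = 1.0
mag[150:200] = 0.5
phase = 2.0 * np.sin(2 * np.pi * np.arange(n) / n)
img = mag * np.exp(1j * phase)

# Low-resolution phase estimate from the center of k-space (32 coefficients).
k = np.fft.fft(img)
lowpass = np.zeros(n, dtype=complex)
lowpass[:16] = k[:16]
lowpass[-16:] = k[-16:]
phase_est = np.angle(np.fft.ifft(lowpass))

def l1_of_diff(x):
    """l1 norm of the spatial finite difference -- smaller means sparser."""
    return np.sum(np.abs(np.diff(x)))

# Demodulating the estimated phase makes the finite difference sparser.
print(l1_of_diff(img))                            # complex image
print(l1_of_diff(img * np.exp(-1j * phase_est)))  # phase-corrected image
```

With the smooth phase removed, only the magnitude edges contribute to the finite difference, which is why the phase-constrained reconstruction tolerates a higher acceleration factor.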


IEEE Signal Processing Magazine | 2008

Seeing speech: Capturing vocal tract shaping using real-time magnetic resonance imaging [Exploratory DSP]

Erik Bresch; Yoon-Chul Kim; Krishna S. Nayak; Dani Byrd; Shrikanth Narayanan

In this paper, real-time (RT) magnetic resonance imaging (MRI) is used to study speech production, in particular the capture of vocal tract shaping.


Magnetic Resonance in Medicine | 2011

Flexible Retrospective Selection of Temporal Resolution in Real-time Speech MRI Using a Golden-Ratio Spiral View Order

Yoon-Chul Kim; Shrikanth Narayanan; Krishna S. Nayak

In speech production research using real‐time magnetic resonance imaging (MRI), the analysis of articulatory dynamics is performed retrospectively. A flexible selection of temporal resolution is highly desirable because of natural variations in speech rate and variations in the speed of different articulators. The purpose of the study is to demonstrate a first application of golden‐ratio spiral temporal view order to real‐time speech MRI and investigate its performance by comparison with conventional bit‐reversed temporal view order. Golden‐ratio view order proved to be more effective at capturing the dynamics of rapid tongue tip motion. A method for automated blockwise selection of temporal resolution is presented that enables the synthesis of a single video from multiple temporal resolution videos and potentially facilitates subsequent vocal tract shape analysis. Magn Reson Med, 2011.
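The key property of a golden-ratio view order is that any contiguous window of views, whatever its start index or width, covers the angular dimension of k-space nearly uniformly, which is what allows the temporal resolution (window width) to be chosen retrospectively. A minimal sketch using the ≈137.5° two-dimensional golden angle (illustrative, not the paper's implementation):

```python
import numpy as np

# 2-D golden angle: 360 degrees divided by the golden ratio squared.
GOLDEN_ANGLE = 720.0 / (3.0 + np.sqrt(5.0))  # ~137.508 degrees

def view_angles(n_views, offset=0):
    """Sorted spiral interleaf angles for views offset..offset+n_views-1."""
    k = np.arange(offset, offset + n_views)
    return np.sort((k * GOLDEN_ANGLE) % 360.0)

def max_gap(angles):
    """Largest angular gap in degrees -- small means uniform coverage."""
    gaps = np.diff(np.concatenate([angles, [angles[0] + 360.0]]))
    return gaps.max()

# Windows of different widths, starting mid-scan, all cover k-space
# nearly uniformly (max gap stays close to the mean gap of 360/width).
for width in (8, 13, 21):
    print(width, round(max_gap(view_angles(width, offset=100)), 1))
```

A conventional bit-reversed order only achieves this for specific window widths and alignments, which is why the golden-ratio order captured rapid tongue-tip motion more effectively.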


Magnetic Resonance in Medicine | 2014

Real-Time 3D Magnetic Resonance Imaging of the Pharyngeal Airway in Sleep Apnea

Yoon-Chul Kim; R. Marc Lebel; Ziyue Wu; Sally L. Davidson Ward; Michael C. K. Khoo; Krishna S. Nayak

To investigate the feasibility of real‐time 3D magnetic resonance imaging (MRI) with simultaneous recording of physiological signals for identifying sites of airway obstruction during natural sleep in pediatric patients with sleep‐disordered breathing.


Magnetic Resonance in Medicine | 2017

A fast and flexible MRI system for the study of dynamic vocal tract shaping.

Sajan Goud Lingala; Yinghua Zhu; Yoon-Chul Kim; Asterios Toutios; Shrikanth Narayanan; Krishna S. Nayak

The aim of this work was to develop and evaluate an MRI‐based system for study of dynamic vocal tract shaping during speech production, which provides high spatial and temporal resolution.


Archives of Otolaryngology-head & Neck Surgery | 2013

Evaluation of Swallow Function After Tongue Cancer Treatment Using Real-Time Magnetic Resonance Imaging: A Pilot Study

Y. Zu; Shrikanth Narayanan; Yoon-Chul Kim; Krishna S. Nayak; Christina R. Bronson-Lowe; Brenda Villegas; Melody Ouyoung; Uttam K. Sinha

IMPORTANCE Magnetic resonance imaging (MRI) has the advantage of imaging swallow function at any anatomical level without changing the position of the patient, which can provide more detailed information than modified barium swallow, currently the gold standard of swallow evaluation. OBJECTIVE To investigate the use of real-time MRI in the evaluation of swallow function of patients with tongue cancer. DESIGN, SETTING, AND PARTICIPANTS Real-time MRI experiments were performed on a Signa Excite HD 1.5-T scanner (GE Healthcare), with gradients capable of 40-mT/m (milli-Tesla per meter) amplitudes and 150-mT/m/ms (mT/m per millisecond) slew rates. The sequence used was a spiral fast gradient echo sequence. Four men with base of tongue or oral tongue squamous cell carcinoma and 3 age-matched healthy men with normal swallowing participated in the experiment. INTERVENTIONS Real-time MRI of the midsagittal plane was collected during swallowing. Coronal planes between the oral tongue and base of tongue and through the middle of the larynx were collected from 1 of the patients. MAIN OUTCOMES AND MEASURES Oral transit time, pharyngeal transit time, submental muscle length change, and the distance change between the hyoid bone and anterior boundary of the thyroid cartilage were measured frame by frame during swallowing. RESULTS All the measurable oral transit and pharyngeal transit times of the patients with cancer were significantly longer than those of the healthy participants. The changes in submental muscle length and the distance between the hyoid bone and thyroid cartilage occurred in concert for all 60 normal swallows; however, the pattern differed for each patient with cancer. To our knowledge, the coronal view of the tongue and larynx revealed information that has not been previously reported. CONCLUSIONS AND RELEVANCE This study has demonstrated the potential of real-time MRI to reveal critical information beyond the capacity of traditional videofluoroscopy. Further investigation is needed to fully establish the technique, procedure, and scope of applying MRI to evaluate swallow function of patients with cancer in research and clinical practice.


IEEE Transactions on Biomedical Engineering | 2016

Dynamic 3-D MR Visualization and Detection of Upper Airway Obstruction During Sleep Using Region-Growing Segmentation

Ahsan Javed; Yoon-Chul Kim; Michael C. K. Khoo; Sally L. Davidson Ward; Krishna S. Nayak

Goal: We demonstrate a novel and robust approach for visualization of upper airway dynamics and detection of obstructive events from dynamic 3-D magnetic resonance imaging (MRI) scans of the pharyngeal airway. Methods: This approach uses 3-D region growing, where the operator selects a region of interest that includes the pharyngeal airway, places two seeds in the patent airway, and determines a threshold for the first frame. Results: This approach required 5 s/frame of CPU time compared to 10 min/frame of operator time for manual segmentation. It compared well with manual segmentation, resulting in Dice Coefficients of 0.84 to 0.94, whereas the Dice Coefficients for two manual segmentations by the same observer were 0.89 to 0.97. It was also able to automatically detect 83% of collapse events. Conclusion: Use of this simple semiautomated segmentation approach improves the workflow of novel dynamic MRI studies of the pharyngeal airway and enables visualization and detection of obstructive events. Significance: Obstructive sleep apnea (OSA) is a significant public health issue affecting 4-9% of adults and 2% of children. Recently, 3-D dynamic MRI of the upper airway has been demonstrated during natural sleep, with sufficient spatiotemporal resolution to noninvasively study patterns of airway obstruction in young adults with OSA. This study makes it practical to analyze these long scans and visualize important factors in an MRI sleep study, such as the time, site, and extent of airway collapse.
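The two ingredients of the paper's workflow are the seeded region growing itself and the Dice coefficient used to compare against manual segmentation. A self-contained sketch on a synthetic volume (a dark airway channel through bright tissue; parameters and data are illustrative, not the study's):

```python
import numpy as np
from collections import deque

def region_grow(vol, seeds, threshold):
    """Flood-fill all voxels 6-connected to the seeds whose intensity is
    below `threshold` (the airway appears dark in these MR images)."""
    mask = np.zeros(vol.shape, dtype=bool)
    q = deque(seeds)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while q:
        z, y, x = q.popleft()
        if mask[z, y, x] or vol[z, y, x] >= threshold:
            continue
        mask[z, y, x] = True
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < vol.shape[0] and 0 <= ny < vol.shape[1]
                    and 0 <= nx < vol.shape[2] and not mask[nz, ny, nx]):
                q.append((nz, ny, nx))
    return mask

def dice(a, b):
    """Dice coefficient -- the overlap metric reported in the paper."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Synthetic volume: bright tissue (1.0) with a dark airway channel (0.1).
vol = np.ones((20, 20, 20))
vol[:, 8:12, 8:12] = 0.1
airway = region_grow(vol, seeds=[(0, 10, 10)], threshold=0.5)
print(airway.sum(), dice(airway, vol < 0.5))  # -> 320 1.0
```

On real data the operator-chosen threshold and two seeds in the patent airway play the role of the hard-coded values here, and a sharp frame-to-frame drop in the segmented airway volume flags a candidate obstructive event.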


international conference on acoustics, speech, and signal processing | 2009

Accelerated 3D MRI of vocal tract shaping using compressed sensing and parallel imaging

Yoon-Chul Kim; Shrikanth Narayanan; Krishna S. Nayak

3D MRI of the upper airway has provided valuable insights into vocal tract shaping and data for the modeling of speech production. Small movements of articulators can lead to large changes in the produced sound; therefore, improving the resolution of these datasets, within the constraints of a sustained sound (6–12 seconds), is an important area for investigation. This paper provides the first application of compressed sensing (CS) with parallel imaging to high-resolution 3D upper airway MRI. We use spatial finite difference as the sparsifying transform, and investigate the use of high-resolution phase information as a constraint during CS reconstruction. In a retrospective subsampling experiment with no sound production, 5x undersampling produced acceptable image quality when using phase-constrained CS reconstruction. The prospective use of this accelerated acquisition enabled 3D vocal-tract MRI during sustained production of English /s/,/∫/,/i/,/r/ with 1.33×1.33×1.33-mm3 spatial resolution and 10 seconds of scan time.


Magnetic Resonance in Medicine | 2014

Evaluation of an independent linear model for acoustic noise on a conventional MRI scanner and implications for acoustic noise reduction.

Ziyue Wu; Yoon-Chul Kim; Michael C. K. Khoo; Krishna S. Nayak

To evaluate an independent linear model for gradient acoustic noise on a conventional MRI scanner, and to explore implications for acoustic noise reduction in routine imaging.
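An independent linear model of this kind treats each gradient axis as a linear time-invariant system: the predicted acoustic noise is the sum over axes of that axis's gradient waveform convolved with a per-axis impulse response measured once per scanner. A sketch with random placeholder waveforms and responses (illustrative only; not the paper's measured data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Three gradient axes, each with its own (placeholder) impulse response
# whose energy decays over time, as a physical response would.
n_axes, resp_len, wave_len = 3, 64, 512
impulse_resp = (rng.standard_normal((n_axes, resp_len))
                * np.exp(-np.arange(resp_len) / 10.0))
gradients = rng.standard_normal((n_axes, wave_len))

# Independent linear model: total noise = sum of per-axis convolutions.
predicted = sum(np.convolve(gradients[a], impulse_resp[a])
                for a in range(n_axes))
print(predicted.shape)  # full convolution: wave_len + resp_len - 1 samples
```

Under such a model, quieter sequences can be designed by shaping each axis's waveform so its spectrum avoids the peaks of that axis's frequency response.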


IEEE Transactions on Nuclear Science | 2009

Concurrent Segmentation and Estimation of Transmission Images for Attenuation Correction in Positron Emission Tomography

John M. M. Anderson; Yoon-Chul Kim; John R. Votaw

When transmission images are obtained using conventional reconstruction methods in stand-alone PET scanners, such as standard clinical PET, microPET, and dedicated brain scanners, the results may be noisy and/or inaccurate. For example, the popular penalized maximum-likelihood method effectively reduces noise, but it does not address the bias problem that results from the incorporation of a penalty function and contamination from emission data due to patient activity. In this paper, we present an algorithm that simultaneously reconstructs transmission images and performs a "soft" segmentation of voxels into the classes: air, patient bed, lung, soft-tissue, and bone. It is through the segmentation step that the algorithm, which we refer to as the concurrent segmentation and estimation (CSE) algorithm, provides a means for incorporating accurate attenuation coefficients. The CSE algorithm is obtained by applying an expectation-maximization-like formulation to a certain maximum a posteriori objective function. This formulation enables us to show that the CSE algorithm monotonically increases the objective function. In experiments using real phantom and synthetic data, the CSE images produced attenuation correction factors and emission images that were more accurate than those obtained using a popular segmentation-based attenuation correction method, and the penalized maximum-likelihood and filtered backprojection methods.
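The "soft" segmentation step assigns each voxel a probability of belonging to each class rather than a hard label, so the reconstruction can be steered toward physically plausible attenuation coefficients. A minimal sketch of such class responsibilities using Gaussian class models with illustrative attenuation values (a subset of the paper's classes; the means and widths are assumptions, not the paper's):

```python
import numpy as np

# Gaussian class models over attenuation coefficients: (mean, std) in cm^-1.
# Values are illustrative placeholders, not the paper's fitted parameters.
classes = {"air": (0.0, 0.002), "lung": (0.03, 0.01),
           "soft-tissue": (0.096, 0.01), "bone": (0.15, 0.02)}

def responsibilities(mu_voxel):
    """Soft assignment: posterior probability of each class for a voxel,
    assuming equal priors (the 1/sqrt(2*pi) factors cancel on normalizing)."""
    w = np.array([np.exp(-0.5 * ((mu_voxel - m) / s) ** 2) / s
                  for m, s in classes.values()])
    return dict(zip(classes, w / w.sum()))

r = responsibilities(0.09)
print(max(r, key=r.get))  # most probable class for mu = 0.09 cm^-1
```

In the full CSE algorithm these responsibilities play the role of the E-step quantities, pulling the reconstructed attenuation values toward the class means while remaining differentiable, which is what permits the monotonicity proof.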

Collaboration

Top co-authors of Yoon-Chul Kim (all at the University of Southern California):

Krishna S. Nayak
Shrikanth Narayanan
Asterios Toutios
Dani Byrd
Yinghua Zhu
Jangwon Kim
Louis Goldstein
Michael C. K. Khoo
Sungbok Lee