Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Hongshik Ahn is active.

Publication


Featured research published by Hongshik Ahn.


Journal of Biological Chemistry | 2002

Transcriptional profiling of bone regeneration: Insight into the molecular complexity of wound repair

Michael Hadjiargyrou; Frank Lombardo; Shanchuan Zhao; William Ahrens; Jungnam Joo; Hongshik Ahn; Mark Jurman; David W. White; Clinton T. Rubin

The healing of skeletal fractures is essentially a replay of bone development, involving the closely regulated, interdependent processes of chondrogenesis and osteogenesis. Using a rat femur model of bone healing to determine the degree of transcriptional complexity of these processes, suppression subtractive hybridization (SSH) was performed between RNA isolated from intact bone and that of callus from post-fracture (PF) days 3, 5, 7, and 10 as a means of identifying up-regulated genes in the regenerative process. Analysis of 3,635 cDNA clones revealed 588 known genes (65.8%, 2,392 clones) and 821 expressed sequence tags (ESTs) (31%, 1,127 clones). The remaining 116 cDNAs (3.2%) yielded no homology and presumably represent novel genes. Microarrays were then constructed to confirm induction of expression and determine the temporal profile of all isolated cDNAs during fracture healing. These experiments confirmed that ∼90% of the subtracted known genes and ∼80% of the ESTs are up-regulated (≥2.5-fold) during the repair process. Clustering analysis revealed subsets of genes, both known and unknown, that exhibited distinct expression patterns over 21 days (PF), indicating distinct roles in the healing process. Additionally, this transcriptional profiling of bone repair revealed a host of activated signaling molecules and even pathways (e.g., Wnt). In summary, the data demonstrate, for the first time, that the healing process is exceedingly complex, involves thousands of activated genes, and indicate that groups of genes rather than individual molecules should be considered if the regeneration of bone is to be accelerated exogenously.


Computational Statistics & Data Analysis | 2007

Classification by ensembles from random partitions of high-dimensional data

Hongshik Ahn; Hojin Moon; Melissa J. Fazzari; Noha Lim; James J. Chen; Ralph L. Kodell

A robust classification procedure is developed based on ensembles of classifiers, with each classifier constructed from a different set of predictors determined by a random partition of the entire set of predictors. The proposed methods combine the results of multiple classifiers to achieve a substantially improved prediction compared to the optimal single classifier. This approach is designed specifically for high-dimensional data sets for which a classifier is sought. By combining classifiers built from each subspace of the predictors, the proposed methods achieve a computational advantage in tackling the growing problem of dimensionality. For each subspace of the predictors, we build a classification tree or logistic regression tree. Our study shows, using four real data sets from different areas, that our methods perform consistently well compared to widely used classification methods. For unbalanced data, our approach maintains the balance between sensitivity and specificity more adequately than many other classification methods considered in this study.
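As a rough illustration of the idea, one can partition the predictors at random into disjoint subsets, fit a base classifier on each subset, and combine the predictions by majority vote. The sketch below is hypothetical and uses a nearest-centroid learner with binary 0/1 labels as a simple stand-in for the classification trees and logistic regression trees used in the paper:

```python
import numpy as np

def fit_centroids(X, y):
    """Nearest-centroid base learner (a stand-in for the paper's trees)."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict_centroids(model, X):
    classes, centroids = model
    # squared Euclidean distance from each row to each class centroid
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

def ensemble_from_random_partition(X, y, n_parts=5, seed=0):
    """Randomly partition the predictors into disjoint subsets and
    fit one base classifier per subset."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(X.shape[1])
    parts = np.array_split(idx, n_parts)
    return [(cols, fit_centroids(X[:, cols], y)) for cols in parts]

def predict_majority(ensemble, X):
    """Combine the base classifiers by majority vote (0/1 labels)."""
    votes = np.stack([predict_centroids(m, X[:, cols]) for cols, m in ensemble])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```

Because each base classifier sees only its own slice of the feature space, the ensemble scales to high-dimensional data without any single model having to handle all predictors at once.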


Artificial Intelligence in Medicine | 2007

Ensemble methods for classification of patients for personalized medicine with high-dimensional data

Hojin Moon; Hongshik Ahn; Ralph L. Kodell; Songjoon Baek; Chien-Ju Lin; James J. Chen

OBJECTIVE: Personalized medicine is defined by the use of genomic signatures of patients in a target population to assign more effective therapies, as well as better diagnosis and earlier interventions that might prevent or delay disease. An objective is to find a novel classification algorithm that can be used to predict response to therapy and thereby help individualize the clinical assignment of treatment.

METHODS AND MATERIALS: Classification algorithms must be highly accurate for the optimal treatment of each patient. Typically, there are numerous genomic and clinical variables over a relatively small number of patients, which makes it challenging for most traditional classification algorithms to avoid over-fitting the data. We developed a robust classification algorithm for high-dimensional data based on ensembles of classifiers built from the optimal number of random partitions of the feature space. The software is available on request from the authors.

RESULTS: The proposed algorithm is applied to genomic data sets on lymphoma patients and lung cancer patients to distinguish disease subtypes for optimal treatment, and to genomic data on breast cancer patients to identify patients most likely to benefit from adjuvant chemotherapy after surgery. The performance of the proposed algorithm is consistently ranked highly compared with the other classification algorithms.

CONCLUSION: The statistical classification method for individualized treatment of diseases developed in this study is expected to play a critical role in developing safer and more effective therapies that replace one-size-fits-all drugs with treatments that focus on specific patient needs.


Journal of Toxicology and Environmental Health | 2009

Human organ/tissue growth algorithms that include obese individuals and black/white population organ weight similarities from autopsy data

John F. Young; Richard H. Luecke; Bruce A. Pearce; Taewon Lee; Hongshik Ahn; Songjoon Baek; Hojin Moon; Daniel W. Dye; Thomas M. Davis; Susan J. Taylor

Physiologically based pharmacokinetic (PBPK) models need the correct organ/tissue weights to match various total body weights in order to be applied to children and to obese individuals. Baseline data from Reference Man for the growth of human organs (adrenals, brain, heart, kidneys, liver, lungs, pancreas, spleen, thymus, and thyroid) were augmented with autopsy data to extend the describing polynomials to include the morbidly obese individual (up to 250 kg). Additional literature data similarly extend the growth curves for blood volume, muscle, skin, and adipose tissue. Collectively, these polynomials were used to calculate blood/organ/tissue weights for males and females from birth to 250 kg, which can be used directly to help parameterize PBPK models. In contrast to other black/white anthropomorphic measurements, the data demonstrated no observable or statistical difference in weights for any organ/tissue between individuals identified as black or white in the autopsy reports.


Artificial Intelligence in Medicine | 2008

A decision support system to facilitate management of patients with acute gastrointestinal bleeding

Adrienne Chu; Hongshik Ahn; Bhawna Halwan; Bruce Kalmin; Everson L. Artifon; Alan N. Barkun; Michail G. Lagoudakis; Atul Kumar

OBJECTIVE: To develop a model to predict the bleeding source and identify the cohort amongst patients with acute gastrointestinal bleeding (GIB) who require urgent intervention, including endoscopy. Patients with acute GIB, an unpredictable event, are most commonly evaluated and managed by non-gastroenterologists. Rapid and consistently reliable risk stratification of patients with acute GIB for urgent endoscopy may potentially improve outcomes amongst such patients by targeting scarce healthcare resources to those who need them the most.

DESIGN AND METHODS: Using ICD-9 codes for acute GIB, 189 patients with acute GIB and all available data variables required to develop and test models were identified from a hospital medical records database. Data on 122 patients were used for development of the model and data on 67 patients for comparative analysis of the models. Clinical data such as presenting signs and symptoms, demographic data, presence of co-morbidities, and laboratory data, along with the corresponding endoscopic diagnoses and outcomes, were collected. The clinical data and endoscopic diagnosis collected for each patient were used to retrospectively ascertain optimal management for that patient. Clinical presentations and the corresponding treatments served as training examples. Eight mathematical models, including artificial neural network (ANN), support vector machine (SVM), k-nearest neighbor, linear discriminant analysis (LDA), shrunken centroid (SC), random forest (RF), logistic regression, and boosting, were trained and tested. The performance of these models was compared using standard statistical analysis and ROC curves.

RESULTS: Overall, the random forest model best predicted the source, need for resuscitation, and disposition, with accuracies of approximately 80% or higher (accuracy for endoscopy was greater than 75%). The area under the ROC curve for RF was greater than 0.85, indicating excellent performance by the random forest model.

CONCLUSION: While most mathematical models are effective as a decision support system for evaluation and management of patients with acute GIB, in our testing the RF model consistently demonstrated the best performance. Amongst patients presenting with acute GIB, mathematical models may facilitate identification of the source of GIB and the need for intervention, and allow optimization of care and healthcare resource allocation; these findings, however, require further validation.
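The ROC comparison used in this study rests on a simple fact: the area under the ROC curve equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney statistic). A minimal, dependency-light sketch of that computation, not the authors' code:

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC via the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs ranked correctly, with ties counting half."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC above 0.85, as reported for the random forest model, means the model ranks a true case above a non-case in more than 85% of such pairs.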


Biometrics | 1994

Tree-structured proportional hazards regression modeling

Hongshik Ahn; Wei-Yin Loh

A method for fitting piecewise proportional hazards models to censored survival data is described. Stratification is performed recursively, using a combination of statistical tests and residual analysis. The bootstrap is employed to keep the probability of a Type I error (the error of discovering two or more strata when there is only one) of the method close to a predetermined value. The proposed method can thus also serve as a formal goodness-of-fit test for the proportional hazards model. Real and simulated data are used for illustration.


SAR and QSAR in Environmental Research | 2006

Decision threshold adjustment in class prediction

James J. Chen; Chen-An Tsai; Hojin Moon; Hongshik Ahn; J. J. Young; Chun-Houh Chen

Standard classification algorithms are generally designed to maximize the number of correct predictions (concordance). The criterion of maximizing the concordance may not be appropriate in certain applications. In practice, some applications may emphasize high sensitivity (e.g., clinical diagnostic tests) and others may emphasize high specificity (e.g., epidemiology screening studies). This paper considers the effects of the decision threshold on sensitivity, specificity, and concordance for four classification methods: logistic regression, classification tree, Fisher's linear discriminant analysis, and a weighted k-nearest neighbor. We investigated the use of decision threshold adjustment to improve the performance of either sensitivity or specificity of a classifier under specific conditions. We conducted a Monte Carlo simulation showing that as the decision threshold increases, the sensitivity decreases and the specificity increases; however, the concordance values in an interval around the maximum concordance are similar. For specified sensitivity and specificity levels, an optimal decision threshold might be determined in an interval around the maximum concordance that meets the specified requirement. Three example data sets were analyzed for illustration.
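The threshold effect described here is easy to demonstrate: predicting class 1 whenever a classifier's score meets a cutoff t, raising t lowers sensitivity and raises specificity. A hypothetical sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def sens_spec_at_threshold(y_true, scores, t):
    """Sensitivity and specificity when predicting class 1 for score >= t."""
    y_true = np.asarray(y_true)
    pred = (np.asarray(scores, dtype=float) >= t).astype(int)
    sens = (pred[y_true == 1] == 1).mean()  # true positive rate
    spec = (pred[y_true == 0] == 0).mean()  # true negative rate
    return sens, spec

def threshold_for_sensitivity(y_true, scores, target=0.9):
    """Largest threshold whose sensitivity still meets the target,
    i.e., the cutoff giving the best specificity at that sensitivity."""
    for t in np.unique(scores)[::-1]:  # scan thresholds high to low
        sens, spec = sens_spec_at_threshold(y_true, scores, t)
        if sens >= target:
            return t, sens, spec
    return None
```

Scanning from high to low cutoffs mirrors the paper's observation: sensitivity only rises as the threshold falls, so the first cutoff that meets a sensitivity requirement is the one sacrificing the least specificity.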


Drug Information Journal | 1997

Shelf-Life Estimation for Multifactor Stability Studies

James J. Chen; Hongshik Ahn; Yi Tsong

In stability analysis, the current Food and Drug Administration (FDA) recommended procedure for estimating the expiration dating period (shelf-life) of a drug is limited to a single-package, single-strength product. Since most drug products are manufactured in more than one strength and are marketed in more than one package, stability analyses must be carried out for every combination of package and/or strength. This paper proposes a generalization of the current FDA procedure to analyze the stability data from a multiple-package and/or multiple-strength study. Monte Carlo simulation was used to address some issues with the current procedure and to evaluate the proposed generalization. The proposed procedure is illustrated by an application to a data set consisting of five batches and two packages. Statistical issues with the current approach and with the generalization that are of concern to industrial statisticians are also discussed.
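The underlying single-batch computation is commonly framed as follows: fit a linear degradation line to potency over time and take the shelf-life as the latest time at which the one-sided lower confidence bound on the fitted mean still meets the acceptance limit. A simplified sketch under those assumptions (the critical value `conf_t` and the search grid are illustrative; the paper's multifactor procedure pools batches, packages, and strengths rather than analyzing one batch):

```python
import numpy as np

def shelf_life(time, potency, limit=90.0, conf_t=2.015):
    """Latest time at which the lower confidence bound of the fitted
    mean potency line stays at or above the acceptance limit.
    `conf_t` must be the one-sided t critical value for n - 2 df
    (2.015 corresponds to 5 df); it is passed in directly to keep
    the sketch dependency-free."""
    t = np.asarray(time, dtype=float)
    y = np.asarray(potency, dtype=float)
    n = len(t)
    slope, intercept = np.polyfit(t, y, 1)       # simple linear degradation
    resid = y - (intercept + slope * t)
    s = np.sqrt((resid ** 2).sum() / (n - 2))    # residual standard error
    sxx = ((t - t.mean()) ** 2).sum()

    grid = np.arange(0.0, 10 * t.max(), 0.1)     # candidate time points
    se = s * np.sqrt(1.0 / n + (grid - t.mean()) ** 2 / sxx)
    lower = intercept + slope * grid - conf_t * se
    ok = grid[lower >= limit]
    return float(ok.max()) if ok.size else 0.0
```

The confidence band widens as the candidate time moves away from the observed time points, which is why extrapolated shelf-lives shrink relative to a naive reading of the fitted line.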


Genome Biology | 2006

Classification methods for the development of genomic signatures from high-dimensional data

Hojin Moon; Hongshik Ahn; Ralph L. Kodell; Chien-Ju Lin; Songjoon Baek; James J. Chen

Personalized medicine is defined by the use of genomic signatures of patients to assign effective therapies. We present Classification by Ensembles from Random Partitions (CERP) for class prediction and apply CERP to genomic data on leukemia patients and to genomic data with several clinical variables on breast cancer patients. CERP performs consistently well compared to the other classification algorithms. The predictive accuracy can be improved by adding some relevant clinical/histopathological measurements to the genomic data.


Fertility and Sterility | 2011

Protection from scrotal hyperthermia in laptop computer users

Yefim Sheynkin; Andrew Winer; Farshid Hajimirzaee; Hongshik Ahn; Kyewon Lee

OBJECTIVE: To evaluate methods of prevention of scrotal hyperthermia in laptop computer (LC) users.

DESIGN: Experimental study.

SETTING: University hospital.

PATIENT(S): Twenty-nine healthy male volunteers.

INTERVENTION(S): Right and left scrotal temperatures and LC and lap pad temperatures were recorded during three separate 60-minute sessions using a working LC in a laptop position: session 1, sitting with closely approximated legs; session 2, sitting with closely approximated legs with a lap pad below the working LC; and session 3, sitting with legs apart at a 70° angle with a lap pad below the working LC.

MAIN OUTCOME MEASURE(S): Scrotal temperature elevation.

RESULT(S): Scrotal temperature increased significantly regardless of leg position or use of a lap pad. However, it was significantly lower in session 3 (1.41 °C ± 0.66 °C on the left and 1.47 °C ± 0.62 °C on the right) than in session 2 (2.18 °C ± 0.69 °C and 2.06 °C ± 0.72 °C) or session 1 (2.31 °C ± 0.96 °C and 2.56 °C ± 0.91 °C). A scrotal temperature elevation of 1 °C was reached at 11 minutes in session 1, 14 minutes in session 2, and 28 minutes in session 3.

CONCLUSION(S): A sitting position with closely approximated legs is the major cause of scrotal hyperthermia. Scrotal shielding with a lap pad does not protect from scrotal temperature elevation. Prevention of scrotal hyperthermia in LC users is presently not feasible. However, scrotal hyperthermia may be reduced by a modified sitting position (legs apart) and significantly shorter use of the LC.

Collaboration


Dive into Hongshik Ahn's collaboration.

Top Co-Authors

Ralph L. Kodell, University of Arkansas for Medical Sciences

Hojin Moon, Chungnam National University

Songjoon Baek, National Center for Toxicological Research

Bruce A. Pearce, National Center for Toxicological Research

Jungnam Joo, Stony Brook University

Chien-Ju Lin, National Center for Toxicological Research

Frank Lombardo, State University of New York System

J. Jack Lee, University of Texas MD Anderson Cancer Center