
Publication


Featured research published by Jihun Hamm.


International Conference on Machine Learning | 2008

Grassmann discriminant analysis: a unifying view on subspace-based learning

Jihun Hamm; Daniel D. Lee

In this paper we propose a discriminant learning framework for problems in which the data consist of linear subspaces instead of vectors. By treating subspaces as basic elements, we can make learning algorithms adapt naturally to problems with linear invariant structures. We propose a unifying view on subspace-based learning methods by formulating the problems on the Grassmann manifold, the set of fixed-dimensional linear subspaces of a Euclidean space. Previous methods for this problem typically adopt an inconsistent strategy: feature extraction is performed in the Euclidean space while non-Euclidean distances are used. In our approach, we treat each subspace as a point on the Grassmann manifold, and perform feature extraction and classification in the same space. We show the feasibility of the approach by using Grassmann kernel functions such as the Projection kernel and the Binet-Cauchy kernel. Experiments with real image databases show that the proposed method performs well compared with state-of-the-art algorithms.
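The Projection kernel named above has a simple closed form, k(Y1, Y2) = ||Y1^T Y2||_F^2 for orthonormal basis matrices Y1 and Y2. A minimal sketch (the function names and the QR-based orthonormalization helper are illustrative, not from the paper):

```python
import numpy as np

def projection_kernel(Y1, Y2):
    """Projection kernel between two subspaces represented by
    orthonormal basis matrices (columns span the subspace):
    k(Y1, Y2) = ||Y1^T Y2||_F^2."""
    M = Y1.T @ Y2
    return float(np.sum(M * M))

def orthonormal_basis(A):
    """Orthonormal basis for the column span of A via thin QR."""
    Q, _ = np.linalg.qr(A)
    return Q

rng = np.random.default_rng(0)
# two random 2-dimensional subspaces of R^5
Y1 = orthonormal_basis(rng.standard_normal((5, 2)))
Y2 = orthonormal_basis(rng.standard_normal((5, 2)))

# the kernel of a subspace with itself equals its dimension p
assert abs(projection_kernel(Y1, Y1) - 2.0) < 1e-9
```

Because the kernel depends only on the span (not on the particular basis), it gives a well-defined positive-definite kernel on the Grassmann manifold that can be plugged into kernel discriminant analysis.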


Medical Image Analysis | 2010

GRAM: A framework for geodesic registration on anatomical manifolds

Jihun Hamm; Dong Hye Ye; Ragini Verma; Christos Davatzikos

Medical image registration is a challenging problem, especially when there is large anatomical variation between subjects. Geodesic registration methods have been proposed to solve the large deformation registration problem. However, analytically defined geodesic paths may not coincide with biologically plausible paths of registration, since the manifold of diffeomorphisms is immensely broader than the manifold spanned by diffeomorphisms between real anatomies. In this paper, we propose a novel framework for large deformation registration that uses the learned manifold of anatomical variation in the data. In this framework, a large deformation between two images is decomposed into a series of small deformations along the shortest path on an empirical manifold that represents anatomical variation. Using a manifold learning technique, the major variation of the data can be visualized by a low-dimensional embedding, and the optimal group template is chosen as the geodesic mean on the manifold. We demonstrate the advantages of the proposed framework over direct registration on both simulated and real databases of brain images.
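The shortest-path decomposition described above can be illustrated on a toy neighborhood graph with Dijkstra's algorithm; the graph, node names, and edge costs below are invented for illustration, with each edge cost standing in for the size of a small deformation between two nearby anatomies:

```python
import heapq

def dijkstra_path(graph, src, dst):
    """Shortest path on a weighted graph given as a dict
    {node: [(neighbor, edge_cost), ...]}; returns the node sequence."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # walk back from the target to recover the waypoint sequence
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

# toy "empirical manifold": the direct deformation A->D is costly
# (implausibly large), but a chain of small deformations A-B-C-D is cheap
graph = {
    "A": [("B", 1.0), ("D", 10.0)],
    "B": [("A", 1.0), ("C", 1.0)],
    "C": [("B", 1.0), ("D", 1.0)],
    "D": [("C", 1.0), ("A", 10.0)],
}
assert dijkstra_path(graph, "A", "D") == ["A", "B", "C", "D"]
```

The intermediate nodes B and C play the role of real anatomies along the path, so the total registration is composed of small, plausible steps rather than one large analytic geodesic.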


Journal of Neuroscience Methods | 2011

Automated Facial Action Coding System for Dynamic Analysis of Facial Expressions in Neuropsychiatric Disorders

Jihun Hamm; Christian G. Kohler; Ruben C. Gur; Ragini Verma

Facial expression is widely used to evaluate emotional impairment in neuropsychiatric disorders. Ekman and Friesen's Facial Action Coding System (FACS) encodes movements of individual facial muscles from distinct momentary changes in facial appearance. Unlike facial expression ratings based on the categorization of expressions into prototypical emotions (happiness, sadness, anger, fear, disgust, etc.), FACS can encode ambiguous and subtle expressions, and is therefore potentially more suitable for analyzing small differences in facial affect. However, FACS rating requires extensive training, and is time-consuming, subjective, and thus prone to bias. To overcome these limitations, we developed an automated FACS based on advanced computer-vision techniques. The system automatically tracks faces in a video, extracts geometric and texture features, and produces temporal profiles of each facial muscle movement. These profiles are quantified to compute the frequencies of single and combined Action Units (AUs) in videos, and they can facilitate statistical studies of large populations in disorders known to impact facial expression. We derived quantitative measures of flat and inappropriate facial affect automatically from the temporal AU profiles. The applicability of the automated FACS was illustrated in a pilot study, in which we applied it to videos of eight schizophrenia patients and controls. We created temporal AU profiles that provided rich information on the dynamics of facial muscle movements for each subject. The quantitative measures of flatness and inappropriateness showed clear differences between patients and controls, highlighting their potential for automatic and objective quantification of symptom severity.


Medical Image Computing and Computer-Assisted Intervention | 2009

Efficient Large Deformation Registration via Geodesics on a Learned Manifold of Images

Jihun Hamm; Christos Davatzikos; Ragini Verma

Geodesic registration methods have been used to solve large deformation registration problems, which are hard to solve with conventional registration methods. However, analytically defined geodesics may not coincide with anatomically optimal paths of registration. In this paper we propose a novel and efficient method for large deformation registration that learns the underlying structure of the data using a manifold learning technique. In this method, a large deformation between two images is decomposed into a series of small deformations along the shortest path on a graph that approximates the metric structure of the data. Furthermore, the graph representation allows us to estimate the optimal group template by minimizing geodesic distances. We demonstrate the advantages of the proposed method with synthetic 2D images and real 3D mouse brain volumes.
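Estimating the group template by minimizing geodesic distances can be sketched as follows: compute all-pairs shortest-path distances on the learned graph, then pick the node with the smallest total distance to everything else. A toy illustration, not the paper's implementation (the chain of anatomies and the Floyd-Warshall helper are assumptions for the example):

```python
def floyd_warshall(nodes, edges):
    """All-pairs shortest-path lengths from an undirected edge dict
    {(u, v): cost}; returns a dict keyed by (u, v)."""
    INF = float("inf")
    d = {(u, v): (0.0 if u == v else INF) for u in nodes for v in nodes}
    for (u, v), w in edges.items():
        d[(u, v)] = min(d[(u, v)], w)
        d[(v, u)] = min(d[(v, u)], w)
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if d[(i, k)] + d[(k, j)] < d[(i, j)]:
                    d[(i, j)] = d[(i, k)] + d[(k, j)]
    return d

def geodesic_mean(nodes, dist):
    """Template = node minimizing the sum of geodesic (graph) distances
    to all other nodes."""
    return min(nodes, key=lambda u: sum(dist[(u, v)] for v in nodes))

# toy chain of anatomies A - B - C: B sits in the middle,
# so it is the natural template
nodes = ["A", "B", "C"]
edges = {("A", "B"): 1.0, ("B", "C"): 1.0}
d = floyd_warshall(nodes, edges)
assert geodesic_mean(nodes, d) == "B"
```

Choosing the template this way keeps every registration path short on the empirical manifold, which is the point of minimizing geodesic rather than Euclidean distances.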


Mobile Computing, Applications, and Services | 2012

Automatic Annotation of Daily Activity from Smartphone-Based Multisensory Streams

Jihun Hamm; Benjamin Stone; Mikhail Belkin; Simon Dennis

We present a system for automatic annotation of daily experience from multisensory streams on smartphones. Using smartphones as a platform facilitates the collection of naturalistic daily activity data, which are difficult to collect with multiple on-body sensors or arrays of sensors affixed to indoor locations. However, recognizing daily activities in unconstrained settings is more challenging than in controlled environments: 1) the multiple heterogeneous sensors in smartphones are noisy, asynchronous, vary in sampling rate, and can have missing data; 2) unconstrained daily activities are continuous, can occur concurrently, and have fuzzy onset and offset boundaries; 3) ground-truth labels obtained from the user's self-report can be erroneous and accurate only on a coarse time scale. To handle these problems, we present a flexible framework for incorporating heterogeneous sensory modalities, combined with state-of-the-art classifiers for sequence labeling. We evaluate the system on real-life data containing 11,721 minutes of multisensory recordings, and demonstrate the accuracy and efficiency of the proposed system for practical lifelogging applications.
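One way to handle the asynchrony, differing sampling rates, and missing data described in point 1 is to align every stream onto fixed-width time windows before classification. A minimal sketch (the window width, sensor names, and per-window averaging rule are illustrative assumptions, not the paper's exact preprocessing):

```python
def window_features(streams, t0, t1, width):
    """Align asynchronous sensor streams onto fixed-width windows.
    `streams` maps sensor name -> list of (timestamp, value); each
    window gets the per-sensor mean, or None when data are missing."""
    n = int((t1 - t0) / width)
    out = [{} for _ in range(n)]
    for name, readings in streams.items():
        buckets = [[] for _ in range(n)]
        for t, v in readings:
            i = int((t - t0) / width)
            if 0 <= i < n:
                buckets[i].append(v)
        for i, b in enumerate(buckets):
            out[i][name] = sum(b) / len(b) if b else None
    return out

streams = {
    "accel": [(0.2, 1.0), (0.7, 3.0), (1.1, 2.0)],  # fast, irregular
    "gps":   [(0.5, 10.0)],                         # sparse; missing later
}
feats = window_features(streams, t0=0.0, t1=2.0, width=1.0)
assert feats[0] == {"accel": 2.0, "gps": 10.0}
assert feats[1] == {"accel": 2.0, "gps": None}
```

The resulting per-window feature dictionaries, with explicit `None` markers for missing modalities, are the kind of input a sequence labeler can consume directly.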


Workshop on Applications of Computer Vision | 2011

Personalized video summarization with human in the loop

Bohyung Han; Jihun Hamm; Jack Sim

In automatic video summarization, a visual summary is typically constructed based on the analysis of low-level features, with little consideration of video semantics. However, the contextual and semantic information of a video is only marginally related to low-level features in practice, although such features are useful for computing visual similarity between frames. We therefore propose a novel video summarization technique in which semantically important information is extracted from a set of keyframes given by a human, and the summary of the video is constructed by automatic temporal segmentation using the analysis of inter-frame similarity to the keyframes. Toward this goal, we model a video sequence with a dissimilarity matrix based on a bidirectional similarity measure between every pair of frames, and subsequently characterize the structure of the video by a nonlinear manifold embedding. We then formulate video summarization as a variant of the 0–1 knapsack problem, which we solve efficiently by dynamic programming. The effectiveness of our algorithm is illustrated quantitatively and qualitatively on realistic videos collected from YouTube.
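The 0–1 knapsack formulation can be sketched with the standard dynamic program: each candidate segment has an importance score (e.g. its similarity to the user's keyframes) and a length, and the summary must fit a length budget. The segment scores and lengths below are invented, and the paper's exact objective may differ:

```python
def knapsack_select(scores, lengths, budget):
    """0-1 knapsack by dynamic programming: choose segments maximizing
    total score subject to a total-length budget.
    Returns (best score, indices of chosen segments)."""
    n = len(scores)
    # best[c] = (score, chosen indices) achievable within capacity c
    best = [(0.0, [])] * (budget + 1)
    for i in range(n):
        new = best[:]
        for c in range(lengths[i], budget + 1):
            s, chosen = best[c - lengths[i]]
            if s + scores[i] > new[c][0]:
                new[c] = (s + scores[i], chosen + [i])
        best = new
    return best[budget]

# toy segments: (importance, length in seconds), 10-second budget;
# two medium segments beat the single long one
scores = [6.0, 5.0, 5.0, 1.0]
lengths = [6, 5, 5, 3]
score, chosen = knapsack_select(scores, lengths, budget=10)
assert chosen == [1, 2] and score == 10.0
```

With per-second length units the table has O(n * budget) entries, which is cheap for summary budgets measured in seconds or shots.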


Medical Image Computing and Computer-Assisted Intervention | 2012

Regional Manifold Learning for Deformable Registration of Brain MR Images

Dong Hye Ye; Jihun Hamm; Dongjin Kwon; Christos Davatzikos; Kilian M. Pohl

We propose a method for deformable registration based on learning the manifolds of individual brain regions. Recent publications on the registration of medical images advocate the use of manifold learning to confine the search space to anatomically plausible deformations. Existing methods construct manifolds based on a single metric over the entire image domain and thus frequently miss regional brain variations. We address this issue by first learning manifolds for specific regions and then computing region-specific deformations from these manifolds. We then determine deformations for the entire image domain by learning the global manifold in such a way that it preserves the region-specific deformations. We evaluate the accuracy of our method by applying it to the LPBA40 dataset and measuring the overlap of the deformed segmentations. The results show significant improvement in registration accuracy on cortical regions compared to other state-of-the-art methods.


International Conference on Distributed Computing Systems | 2015

Crowd-ML: A Privacy-Preserving Learning Framework for a Crowd of Smart Devices

Jihun Hamm; Adam C. Champion; Guoxing Chen; Mikhail Belkin; Dong Xuan

Smart devices with built-in sensors, computational capabilities, and network connectivity have become increasingly pervasive. Crowds of smart devices offer opportunities to collectively sense and perform computing tasks at an unprecedented scale. This paper presents Crowd-ML, a privacy-preserving machine learning framework for a crowd of smart devices, which can solve a wide range of learning problems for crowd sensing data with differential privacy guarantees. Crowd-ML endows a crowd sensing system with the ability to learn classifiers or predictors online from crowd sensing data, privately and with minimal computational overhead on devices and servers, making the framework suitable for practical large-scale use. We analyze the performance and scalability of Crowd-ML and implement the system with off-the-shelf smartphones as a proof of concept. We demonstrate the advantages of Crowd-ML with real and simulated experiments under various conditions.
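A device-side differential-privacy step of the kind described can be sketched with gradient clipping plus the standard Laplace mechanism; this is a generic epsilon-DP sketch, not Crowd-ML's exact protocol, and the function name and parameters are assumptions:

```python
import numpy as np

def privatize_gradient(grad, clip, epsilon, rng):
    """Clip the device gradient so its L1 norm is at most `clip`
    (bounding the sensitivity of the release), then add Laplace noise
    with scale clip/epsilon: the standard epsilon-DP Laplace mechanism."""
    grad = np.asarray(grad, dtype=float)
    norm = np.abs(grad).sum()
    if norm > clip:
        grad = grad * (clip / norm)
    noise = rng.laplace(loc=0.0, scale=clip / epsilon, size=grad.shape)
    return grad + noise

rng = np.random.default_rng(0)
# a device releases a noisy, clipped gradient instead of its raw data;
# the server only ever aggregates these privatized updates
noisy = privatize_gradient([3.0, -4.0], clip=1.0, epsilon=0.5, rng=rng)
assert noisy.shape == (2,)
```

Because noise is added on the device before anything leaves it, the server-side learner never observes raw sensing data, which is the property the framework's guarantees rest on.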


IEEE Transactions on Medical Imaging | 2014

Regional Manifold Learning for Disease Classification

Dong Hye Ye; Benoit Desjardins; Jihun Hamm; Harold I. Litt; Kilian M. Pohl

While manifold learning from images has become widely used in medical image analysis, the accuracy of existing implementations suffers from viewing each image as a single data point. To address this issue, we parcellate images into regions and then separately learn the manifold for each region. We use the regional manifolds as low-dimensional descriptors of high-dimensional morphological image features, which are then fed into a classifier to identify regions affected by disease. We produce a single ensemble decision for each scan by a weighted combination of these regional classification results, where each weight is determined by the regional accuracy of detecting the disease. When applied to cardiac magnetic resonance images of 50 normal controls and 50 patients with reconstructive surgery for Tetralogy of Fallot, our method achieves significantly better classification accuracy than approaches that learn a single manifold across the entire image domain.
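The accuracy-weighted combination of regional classifiers can be sketched as a weighted vote; the region names, probabilities, and accuracies below are invented for illustration:

```python
def ensemble_decision(regional_probs, regional_acc):
    """Combine per-region disease probabilities into one decision,
    weighting each region by its own detection accuracy.
    Returns (weighted score, disease flag at threshold 0.5)."""
    total = sum(regional_acc.values())
    score = sum(regional_acc[r] * p for r, p in regional_probs.items()) / total
    return score, score >= 0.5

# toy scan: the reliable region votes "disease", the noisy one does not,
# so the reliable region dominates the weighted score
probs = {"septum": 0.9, "apex": 0.3}
accs = {"septum": 0.85, "apex": 0.55}
score, is_patient = ensemble_decision(probs, accs)
assert is_patient
```

Weighting by per-region accuracy lets unreliable regions contribute without letting them swamp regions where the classifier is known to work well.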


Schizophrenia Research and Treatment | 2014

Dimensional Information-Theoretic Measurement of Facial Emotion Expressions in Schizophrenia

Jihun Hamm; Amy E. Pinkham; Ruben C. Gur; Ragini Verma; Christian G. Kohler

Altered facial expressions of emotions are characteristic impairments in schizophrenia. Ratings of affect have traditionally been limited to clinical rating scales and facial muscle movement analysis, which require extensive training and have limitations in methodology and ecological validity. To improve the reliable assessment of dynamic facial expression changes, we developed automated measurements of facial emotion expressions based on two information-theoretic measures of expressivity: the ambiguity and the distinctiveness of facial expressions. These measures were examined in matched groups of persons with schizophrenia (n = 28) and healthy controls (n = 26) who underwent video acquisition to assess the expressivity of basic emotions (happiness, sadness, anger, fear, and disgust) in evoked conditions. Persons with schizophrenia scored higher on ambiguity, the conditional entropy within the expression of a single emotion, and lower on distinctiveness, the mutual information across expressions of different emotions. The automated measures compared favorably with observer-based ratings. This method can be applied to delineate dynamic emotional expressivity in healthy and clinical populations.
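The two measures can be computed from (intended emotion, observed expression) pairs. A hedged sketch using discrete entropy and mutual information; the paper works with continuous facial-expression profiles, so this is a simplified discrete analogue with invented labels:

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of a Counter of outcome frequencies."""
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values() if c)

def ambiguity_and_distinctiveness(frames):
    """`frames` is a list of (intended_emotion, observed_expression)
    pairs. Ambiguity = conditional entropy H(expression | emotion);
    distinctiveness = mutual information I(expression; emotion)."""
    emo = Counter(e for e, _ in frames)
    expr = Counter(x for _, x in frames)
    n = len(frames)
    h_cond = 0.0
    for e, ne in emo.items():
        sub = Counter(x for ee, x in frames if ee == e)
        h_cond += ne / n * entropy(sub)
    return h_cond, entropy(expr) - h_cond

# perfectly distinct expressions: zero ambiguity, maximal distinctiveness
frames = [("happy", "smile"), ("happy", "smile"),
          ("sad", "frown"), ("sad", "frown")]
amb, dist = ambiguity_and_distinctiveness(frames)
assert amb == 0.0 and abs(dist - 1.0) < 1e-9
```

Flat or inappropriate affect shows up as the opposite pattern: expressions that vary little with the intended emotion drive the conditional entropy up and the mutual information down.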

Collaboration

Top co-authors of Jihun Hamm:

Daniel D. Lee, University of Pennsylvania
Dong Hye Ye, University of Pennsylvania
Ragini Verma, University of Pennsylvania
Ruben C. Gur, University of Pennsylvania
Yung-Kyun Noh, Seoul National University
Benoit Desjardins, Hospital of the University of Pennsylvania