Publication

Featured research published by Ming-Ching Chang.


International Conference on Computer Vision | 2011

Probabilistic group-level motion analysis and scenario recognition

Ming-Ching Chang; Nils Krahnstoever; Weina Ge

This paper addresses the challenge of recognizing behavior of groups of individuals in unconstrained surveillance environments. As opposed to approaches that rely on agglomerative or decisive hierarchical clustering techniques, we propose to recognize group interactions without making hard decisions about the underlying group structure. Instead we use a probabilistic grouping strategy evaluated from the pairwise spatial-temporal tracking information. A path-based grouping scheme determines a soft segmentation of groups and produces a weighted connection graph whose edges express the probability of individuals belonging to a group. Without further segmenting this graph, we show how a large number of low- and high-level behavior recognition tasks can be performed. Our work builds on a mature multi-camera multi-target person tracking system that operates in real-time. We derive probabilistic models to analyze individual track motion as well as group interactions. We show that the soft grouping can combine with motion analysis elegantly to robustly detect and predict group-level activities. Experimental results demonstrate the efficacy of our approach.
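
The soft-grouping idea above lends itself to a compact illustration. Below is a minimal sketch (not the authors' implementation) of turning pairwise spatio-temporal track proximity into a weighted connection graph; the Gaussian distance-to-probability mapping, the sigma value, and the function names are illustrative assumptions.

```python
import numpy as np

def pairwise_grouping_probability(track_a, track_b, sigma=2.0):
    """Soft grouping score from the average spatial distance between two
    synchronized trajectories (T x 2 arrays). The Gaussian mapping of
    distance to probability is an illustrative assumption."""
    dist = np.linalg.norm(track_a - track_b, axis=1).mean()
    return float(np.exp(-(dist ** 2) / (2.0 * sigma ** 2)))

def build_connection_graph(tracks, sigma=2.0):
    """Weighted connection graph: entry (i, j) is the probability that
    individuals i and j belong to the same group."""
    n = len(tracks)
    w = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            w[i, j] = w[j, i] = pairwise_grouping_probability(tracks[i], tracks[j], sigma)
    return w

# Example: three tracked individuals over 5 frames (random-walk stand-ins)
tracks = [np.cumsum(np.random.randn(5, 2), axis=0) + offset
          for offset in ([0, 0], [1, 0], [10, 10])]
print(build_connection_graph(tracks))
```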


Advanced Video and Signal Based Surveillance | 2010

Group Level Activity Recognition in Crowded Environments across Multiple Cameras

Ming-Ching Chang; Nils Krahnstoever; Ser-Nam Lim; Ting Yu

Environments such as schools, public parks, and prisons, and others that contain a large number of people are typically characterized by frequent and complex social interactions. In order to identify activities and behaviors in such environments, it is necessary to understand the interactions that take place at a group level. To this end, this paper addresses the problem of detecting and predicting suspicious and in particular aggressive behaviors between groups of individuals such as gangs in prison yards. The work builds on a mature multi-camera multi-target person tracking system that operates in real-time and has the ability to handle crowded conditions. We consider two approaches for grouping individuals: (i) agglomerative clustering favored by the computer vision community, as well as (ii) decisive clustering based on the concept of modularity, which is favored by the social network analysis community. We show the utility of such grouping analysis towards the detection of group activities of interest. The presented algorithm is integrated with a system operating in real-time to successfully detect highly realistic aggressive behaviors enacted by correctional officers in a simulated prison environment. We present results from these enactments that demonstrate the efficacy of our approach.
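
As a rough illustration of the two grouping strategies contrasted above, the sketch below runs (i) agglomerative clustering and (ii) modularity-based grouping on the same illustrative proximity weights using off-the-shelf libraries; it is not the paper's code, and the weights and threshold are made up.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Illustrative pairwise proximity weights between 5 tracked individuals
w = np.array([[0.0, 0.9, 0.8, 0.1, 0.1],
              [0.9, 0.0, 0.7, 0.2, 0.1],
              [0.8, 0.7, 0.0, 0.1, 0.2],
              [0.1, 0.2, 0.1, 0.0, 0.9],
              [0.1, 0.1, 0.2, 0.9, 0.0]])

# (i) Agglomerative clustering on distances (1 - weight)
dist = squareform(1.0 - w, checks=False)
labels = fcluster(linkage(dist, method="average"), t=0.5, criterion="distance")
print("agglomerative groups:", labels)

# (ii) Modularity-based grouping on the weighted graph
g = nx.from_numpy_array(w)
groups = greedy_modularity_communities(g, weight="weight")
print("modularity groups:", [sorted(c) for c in groups])
```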


International Symposium on 3D Data Processing, Visualization and Transmission | 2004

3D shape registration using regularized medial scaffolds

Ming-Ching Chang; Frederic Fol Leymarie; Benjamin B. Kimia

This work proposes a method for global registration based on matching 3D medial structures of unorganized point clouds or triangulated meshes. Most practical known methods are based on the iterative closest point (ICP) algorithm, which requires an initial alignment close to the globally optimal solution to ensure convergence to a valid solution. Furthermore, it can also fail when there are points in one dataset with no corresponding matches in the other dataset. The proposed method automatically finds an initial alignment close to the global optimal by using the medial structure of the datasets. For this purpose, we first compute the medial scaffold of a 3D dataset: a 3D graph made of special shock curves linking special shock nodes. This medial scaffold is then regularized exploiting the known transitions of the 3D medial axis under deformation or perturbation of the input data. The resulting simplified medial scaffolds are then registered using a modified graduated assignment graph matching algorithm. The proposed method shows robustness to noise, shape deformations, and varying surface sampling densities.
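
The full medial scaffold pipeline is beyond a short snippet, but the alignment step it feeds can be sketched: given hypothetical correspondences between matched scaffold nodes, a least-squares rigid transform (Kabsch algorithm) yields an initial alignment that could seed ICP. The point arrays below are purely illustrative.

```python
import numpy as np

def rigid_transform_from_matches(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping matched
    3D points src -> dst via the Kabsch algorithm. In the paper's pipeline such
    correspondences would come from matched medial scaffold nodes; here they are
    just illustrative arrays."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    h = (src - src_c).T @ (dst - dst_c)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst_c - r @ src_c
    return r, t

# Toy example: recover a known rotation about z plus a translation
rng = np.random.default_rng(0)
src = rng.normal(size=(10, 3))
angle = np.pi / 6
r_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ r_true.T + np.array([1.0, 2.0, 3.0])
r, t = rigid_transform_from_matches(src, dst)
print(np.allclose(r, r_true), np.allclose(t, [1.0, 2.0, 3.0]))
```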


Magnetic Resonance Imaging | 2015

Quantitative pharmacokinetic analysis of prostate cancer DCE-MRI at 3T: comparison of two arterial input functions on cancer detection with digitized whole mount histopathological validation.

Fiona M. Fennessy; Andriy Fedorov; Tobias Penzkofer; Kyung Won Kim; Michelle S. Hirsch; Mark G. Vangel; Paul Masry; Trevor A. Flood; Ming-Ching Chang; Clare M. Tempany; Robert V. Mulkern; Sandeep N. Gupta

Accurate pharmacokinetic (PK) modeling of dynamic contrast enhanced MRI (DCE-MRI) in prostate cancer (PCa) requires knowledge of the concentration time course of the contrast agent in the feeding vasculature, the so-called arterial input function (AIF). The purpose of this study was to compare AIF choice in differentiating peripheral zone PCa from non-neoplastic prostatic tissue (NNPT), using PK analysis of high temporal resolution prostate DCE-MRI data and whole-mount pathology (WMP) validation. This prospective study was performed in 30 patients who underwent multiparametric endorectal prostate MRI at 3.0T and WMP validation. PCa foci were annotated on WMP slides and MR images using 3D Slicer. Foci ≥0.5 cm³ were contoured as tumor regions of interest (TROIs) on subtraction DCE (early-arterial minus pre-contrast) images. PK analyses of TROI and NNPT data were performed using automatic AIF (aAIF) and model AIF (mAIF) methods. A paired t-test compared mean and 90th percentile (p90) PK parameters obtained with the two AIF approaches. Receiver operating characteristic (ROC) analysis determined diagnostic accuracy (DA) of PK parameters. Logistic regression determined correlation between PK parameters and histopathology. Mean TROI and NNPT PK parameters were higher using aAIF vs. mAIF (p<0.05). There was no significant difference in DA between AIF methods: highest for the p90 volume transfer constant (Ktrans) (aAIF area under the ROC curve (Az) = 0.827; mAIF Az = 0.93). Tumor cell density correlated with aAIF Ktrans (p=0.03). Our results indicate that DCE-MRI using both AIF methods is excellent in discriminating PCa from NNPT. If quantitative DCE-MRI is to be used as a biomarker in PCa, the same AIF method should be used consistently throughout the study.
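
For readers unfamiliar with the PK modeling referred to above, the sketch below fits the standard Tofts model to a tissue concentration curve given an AIF; it is only a generic illustration with a synthetic AIF and noise level, not the study's analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 5, 150)                  # time in minutes, high temporal resolution
aif = 5.0 * t ** 2 * np.exp(-2.0 * t)       # synthetic arterial input function (a.u.)

def tofts(t, ktrans, ve):
    """Standard Tofts model: Ct(t) = Ktrans * (AIF convolved with exp(-Ktrans/ve * t))."""
    dt = t[1] - t[0]
    irf = np.exp(-(ktrans / ve) * t)        # impulse response of the EES compartment
    return ktrans * np.convolve(aif, irf)[: len(t)] * dt

# Synthetic tissue curve with noise, then fit Ktrans and ve
ct_noisy = tofts(t, 0.25, 0.4) + np.random.normal(0, 0.005, size=t.shape)
(ktrans_fit, ve_fit), _ = curve_fit(tofts, t, ct_noisy, p0=(0.1, 0.2),
                                    bounds=([1e-4, 1e-3], [2.0, 1.0]))
print(f"Ktrans = {ktrans_fit:.3f} /min, ve = {ve_fit:.3f}")
```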


Workshop on Applications of Computer Vision | 2012

Group context learning for event recognition

Yimeng Zhang; Weina Ge; Ming-Ching Chang; Xiaoming Liu

We address the problem of group-level event recognition from videos. The events of interest are defined based on the motion and interaction of members in a group over time. Example events include group formation, dispersion, following, chasing, flanking, and fighting. To recognize these complex group events, we propose a novel approach that learns the group-level scenario context from automatically extracted individual trajectories. We first perform a group structure analysis to produce a weighted graph that represents the probabilistic group membership of the individuals. We then extract features from this graph to capture the motion and action contexts among the groups. The features are represented using the “bag-of-words” scheme. Finally, our method uses a learned Support Vector Machine (SVM) to classify a video segment into the six event categories. Our implementation builds upon a mature multi-camera multi-target tracking system and recognizes group-level events involving up to 20 individuals in real-time.
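
A hedged sketch of the bag-of-words plus SVM classification stage described above, using scikit-learn stand-ins; the descriptors, vocabulary size, and labels are illustrative placeholders rather than the paper's actual features.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bag_of_words(descriptors_per_clip, vocab):
    """Normalized histogram of visual-word assignments for each video clip."""
    hists = []
    for desc in descriptors_per_clip:
        words = vocab.predict(desc)
        h, _ = np.histogram(words, bins=np.arange(vocab.n_clusters + 1))
        hists.append(h / max(h.sum(), 1))
    return np.array(hists)

# Illustrative stand-ins: per-clip motion/interaction descriptors and event labels
rng = np.random.default_rng(0)
train_desc = [rng.normal(size=(60, 8)) for _ in range(30)]
train_labels = rng.integers(0, 6, size=30)      # six group-event categories

vocab = KMeans(n_clusters=16, n_init=10, random_state=0).fit(np.vstack(train_desc))
clf = SVC(kernel="rbf").fit(bag_of_words(train_desc, vocab), train_labels)

test_desc = [rng.normal(size=(60, 8))]
print(clf.predict(bag_of_words(test_desc, vocab)))
```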


International Journal of Pattern Recognition and Artificial Intelligence | 2001

Fast search algorithms for industrial inspection

Ming-Ching Chang; Chiou-Shann Fuh; Hsien-Yei Chen

This paper presents an efficient general-purpose search algorithm for alignment and an applied procedure for IC print mark quality inspection. The search algorithm is based on normalized cross-correlation and enhances it with a hierarchical resolution pyramid, dynamic programming, and pixel over-sampling to achieve subpixel accuracy on one or more targets. The general-purpose search procedure is robust to linear changes in image intensity and thus can be applied to general industrial visual inspection. Accuracy, speed, reliability, and repeatability are all critical for industrial use. After proper optimization, the proposed procedure was tested on the IC inspection platforms in the Mechanical Industry Research Laboratories (MIRL), Industrial Technology Research Institute (ITRI), Taiwan. The proposed method meets all these criteria and has worked well in field tests on various IC products.
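
A minimal coarse-to-fine normalized cross-correlation search in the spirit of the hierarchical resolution pyramid described above (without the dynamic programming or subpixel over-sampling); the OpenCV calls are standard, but the pyramid depth and refinement window size are illustrative choices.

```python
import cv2
import numpy as np

def pyramid_ncc_search(image, template, levels=3):
    """Coarse-to-fine normalized cross-correlation search: match at the coarsest
    pyramid level, then refine the location in a small window at each finer level.
    The window margin of 8 pixels is an illustrative choice."""
    imgs, tmps = [image], [template]
    for _ in range(levels - 1):
        imgs.append(cv2.pyrDown(imgs[-1]))
        tmps.append(cv2.pyrDown(tmps[-1]))

    # Coarsest level: full search
    res = cv2.matchTemplate(imgs[-1], tmps[-1], cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(res)
    x, y = loc

    # Finer levels: refine within a local window around the scaled estimate
    for lvl in range(levels - 2, -1, -1):
        x, y = 2 * x, 2 * y
        th, tw = tmps[lvl].shape[:2]
        x0, y0 = max(x - 8, 0), max(y - 8, 0)
        win = imgs[lvl][y0:y + th + 8, x0:x + tw + 8]
        res = cv2.matchTemplate(win, tmps[lvl], cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        x, y = x0 + loc[0], y0 + loc[1]
    return (x, y), score

# Toy run: find a 32x32 patch cut from a random image
image = np.random.randint(0, 255, (256, 256), np.uint8)
template = image[100:132, 80:112].copy()
print(pyramid_ncc_search(image, template))
```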


Proceedings of SPIE | 2010

A skull stripping method using deformable surface and tissue classification

Xiaodong Tao; Ming-Ching Chang

Many neuroimaging applications require an initial step of skull stripping to extract the cerebrum, cerebellum, and brain stem. We approach this problem by combining deformable surface models and a fuzzy tissue classification technique. Our assumption is that contrast exists between brain tissue (gray matter and white matter) and cerebrospinal fluid, which separates the brain from the extra-cranial tissue. We first analyze the intensity of the entire image to find an approximate centroid of the brain and initialize an ellipsoidal surface around it. We then perform a fuzzy tissue classification with bias field correction within the surface. Tissue classification and bias field are extrapolated to the entire image. The surface iteratively deforms under a force field computed from the tissue classification and the surface smoothness. Because of the bias field correction and tissue classification, the proposed algorithm depends less on particular imaging contrast and is robust to inhomogeneous intensity often observed in magnetic resonance images. We tested the algorithm on all T1 weighted images in the OASIS database, which includes skull stripping results using Brain Extraction Tool; the Dice scores have an average of 0.948 with a standard deviation of 0.017, indicating a high degree of agreement. The algorithm takes on average 2 minutes to run on a typical PC and produces a brain mask and membership functions for gray matter, white matter, and cerebrospinal fluid. We also tested the algorithm on T2 images to demonstrate its generality, where the same algorithm without parameter adjustment gives satisfactory results.
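
The fuzzy tissue-classification ingredient can be sketched with a small fuzzy c-means routine on voxel intensities; the bias-field correction and deformable surface evolution are omitted, and the intensity distributions below are synthetic stand-ins for CSF, gray matter, and white matter.

```python
import numpy as np

def fuzzy_cmeans(x, n_classes=3, m=2.0, n_iter=50):
    """Fuzzy c-means on 1-D intensities x: returns class centers and a membership
    matrix u (n_voxels x n_classes) whose rows sum to 1."""
    rng = np.random.default_rng(0)
    u = rng.dirichlet(np.ones(n_classes), size=len(x))
    for _ in range(n_iter):
        w = u ** m
        centers = (w.T @ x) / w.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / (d ** (2.0 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Illustrative intensities drawn from three tissue-like modes (CSF, GM, WM)
rng = np.random.default_rng(1)
intensities = np.concatenate([rng.normal(30, 5, 1000),
                              rng.normal(80, 8, 1000),
                              rng.normal(130, 6, 1000)])
centers, memberships = fuzzy_cmeans(intensities)
print(np.sort(centers))
```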


Computer Vision and Pattern Recognition | 2008

Regularizing 3D medial axis using medial scaffold transforms

Ming-Ching Chang; Benjamin B. Kimia

This paper addresses a key bottleneck in the use of the 3D medial axis (MA) representation, namely, how the complex MA structure can be regularized so that similar, within-category 3D shapes yield similar 3D MA that are distinct from the non-category shapes. We rely on previous work which (i) constructs a hierarchical MA hypergraph, the medial scaffold (MS), and (ii) provides a theoretical classification of the instabilities of this structure, or transitions (sudden topological changes due to a small perturbation). Shapes at a transition point are degenerate. Our approach is to recognize the transitions that are close-by to a given shape and transform the shape to this transition point, and repeat until no close-by transitions exist. This move towards degeneracy is the basis of simplification of shape. We derive 11 transforms from 7 transitions and follow a greedy scheme in applying the transforms. The results show that the simplified MA preserves within-category similarity, thus indicating its potential use in various applications including shape analysis, manipulation, and matching.
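
The greedy application of transforms can be pictured schematically: repeatedly apply the cheapest available transform until none falls below a cost threshold. The callables and the toy "saliency list" below are hypothetical stand-ins, not the paper's 11 scaffold transforms.

```python
def greedy_regularize(scaffold, candidate_transforms, cost, apply, threshold):
    """Greedy medial-structure simplification (schematic): repeatedly apply the
    cheapest available transform until none falls below the threshold.
    candidate_transforms, cost, and apply are hypothetical callables."""
    while True:
        candidates = candidate_transforms(scaffold)
        if not candidates:
            return scaffold
        best = min(candidates, key=lambda c: cost(scaffold, c))
        if cost(scaffold, best) > threshold:
            return scaffold
        scaffold = apply(scaffold, best)

# Toy instance: the "scaffold" is a list of branch saliencies and a transform
# simply removes one low-saliency branch.
scaffold = [0.05, 0.4, 0.02, 0.9, 0.1]
result = greedy_regularize(
    scaffold,
    candidate_transforms=lambda s: list(range(len(s))),
    cost=lambda s, i: s[i],
    apply=lambda s, i: s[:i] + s[i + 1:],
    threshold=0.2,
)
print(result)   # branches with saliency below 0.2 removed greedily
```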


IEEE International Conference on Automatic Face and Gesture Recognition | 2015

Modeling transition patterns between events for temporal human action segmentation and classification

Yelin Kim; Jixu Chen; Ming-Ching Chang; Xin Wang; Emily Mower Provost; Siwei Lyu

We propose a temporal segmentation and classification method that accounts for transition patterns between events of interest. We apply this method to automatically detect salient human action events from videos. A discriminative classifier (e.g., Support Vector Machine) is used to recognize human action events and an efficient dynamic programming algorithm is used to jointly determine the starting and ending temporal segments of recognized human actions. The key difference from previous work is that we introduce the modeling of two kinds of event transition information, namely event transition segments, which capture the occurrence patterns between two consecutive events of interest, and event transition probabilities, which model the transition probability between the two events. Experimental results show that our approach significantly improves the segmentation and recognition performance for the two datasets we tested, in which distinctive transition patterns between events exist.
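
A schematic version of the joint segmentation step: dynamic programming over segment boundaries and labels that maximizes per-frame classifier scores plus log transition probabilities between consecutive events. The per-frame scores stand in for SVM outputs, the transition matrix is illustrative, and the event transition segments described above are not modeled in this sketch.

```python
import numpy as np

def segment_events(frame_scores, log_trans, min_len=2):
    """Dynamic-programming segmentation (schematic): choose segment boundaries and
    labels so that summed per-frame scores plus log transition probabilities
    between consecutive segments are maximized.
    frame_scores: (T, K) per-frame scores for K event classes.
    log_trans:    (K, K) log transition probabilities between events."""
    t_len, k = frame_scores.shape
    cum = np.vstack([np.zeros(k), np.cumsum(frame_scores, axis=0)])
    best = np.full((t_len + 1, k), -np.inf)
    back = {}
    best[0, :] = 0.0
    for t in range(min_len, t_len + 1):
        for s in range(0, t - min_len + 1):
            seg = cum[t] - cum[s]                 # score of segment [s, t) per label
            for lbl in range(k):
                prev = best[s] + (log_trans[:, lbl] if s > 0 else 0.0)
                j = int(np.argmax(prev))
                val = prev[j] + seg[lbl]
                if val > best[t, lbl]:
                    best[t, lbl] = val
                    back[(t, lbl)] = (s, j)
    # Backtrack the best labeling as (start, end, label) triples
    lbl = int(np.argmax(best[t_len]))
    t, segs = t_len, []
    while t > 0:
        s, j = back[(t, lbl)]
        segs.append((s, t, lbl))
        t, lbl = s, j
    return segs[::-1]

scores = np.random.randn(20, 3)                   # stand-in for per-frame SVM scores
log_trans = np.log(np.full((3, 3), 1.0 / 3))      # illustrative uniform transitions
print(segment_events(scores, log_trans))
```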


Workshop on Applications of Computer Vision | 2011

Tracking gaze direction from far-field surveillance cameras

Karthik Sankaranarayanan; Ming-Ching Chang; Nils Krahnstoever

We present a real-time approach to estimating the gaze direction of multiple individuals using a network of far-field surveillance cameras. This work is part of a larger surveillance system that utilizes a network of fixed cameras as well as PTZ cameras to perform site-wide tracking of individuals. Based on the tracking information, one or more PTZ cameras are cooperatively controlled to obtain close-up facial images of individuals. Within these close-up shots, face detection and head pose estimation are performed and the results are provided back to the tracking system to track the individual gazes. A new cost metric based on location and gaze orientation is proposed to robustly associate head observations with tracker states. The tracking system can thus leverage the newly obtained gaze information for two purposes: (i) improve the localization of individuals in crowded settings, and (ii) aid high-level surveillance tasks such as understanding gesturing, interactions between individuals, and finding the object-of-interest that people are looking at. In security applications, our system can detect if a subject is looking at the security cameras or guard posts.
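
A minimal sketch of associating head/gaze observations with tracker states by minimizing a combined location and gaze-orientation cost via the Hungarian algorithm; the weights and the specific cost form are assumptions, not the paper's metric.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_heads_to_tracks(track_pos, track_gaze, head_pos, head_gaze,
                              w_loc=1.0, w_gaze=0.5):
    """Assign head/gaze observations to tracker states by minimizing a combined
    cost of positional distance and wrapped gaze-angle difference.
    The weights and cost form are illustrative assumptions."""
    loc = np.linalg.norm(track_pos[:, None, :] - head_pos[None, :, :], axis=2)
    ang = np.abs((track_gaze[:, None] - head_gaze[None, :] + np.pi) % (2 * np.pi) - np.pi)
    cost = w_loc * loc + w_gaze * ang
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

# Toy example: two tracks and two new head observations
track_pos = np.array([[0.0, 0.0], [5.0, 5.0]])
track_gaze = np.array([0.0, np.pi / 2])           # previous gaze estimates (radians)
head_pos = np.array([[5.2, 4.8], [0.1, -0.2]])    # new close-up head detections
head_gaze = np.array([np.pi / 2, 0.1])
print(associate_heads_to_tracks(track_pos, track_gaze, head_pos, head_gaze))
```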

Collaboration


Dive into Ming-Ching Chang's collaborations.

Top Co-Authors

Siwei Lyu (State University of New York System)

Honggang Qi (Chinese Academy of Sciences)

Longyin Wen (Chinese Academy of Sciences)

Lipeng Ke (Chinese Academy of Sciences)

Dawei Du (Chinese Academy of Sciences)