Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zhaozheng Yin is active.

Publication


Featured research published by Zhaozheng Yin.


Computer Vision and Pattern Recognition | 2007

Belief Propagation in a 3D Spatio-temporal MRF for Moving Object Detection

Zhaozheng Yin; Robert T. Collins

Previous pixel-level change detection methods either contain a background updating step that is costly for moving cameras (background subtraction) or cannot locate object position and shape accurately (frame differencing). In this paper, we present a belief propagation approach for moving object detection using a 3D Markov random field (MRF) model. Each hidden state in the 3D MRF model represents a pixel's motion likelihood and is estimated using message passing in a 6-connected spatio-temporal neighborhood. This approach deals effectively with difficult moving object detection problems such as objects camouflaged by similar appearance to the background, or objects with uniform color that frame difference methods can only partially detect. Three examples are presented in which moving objects are detected and tracked successfully while handling appearance change, shape change, varied moving speed/direction, scale change, and occlusion/clutter.
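
As a rough illustration of the message-passing idea (not the authors' implementation), the sketch below runs a few iterations of sum-product belief propagation for binary static/moving labels on a 6-connected (x, y, t) grid. The unary likelihoods, the Potts-style compatibility, the iteration count, and the periodic boundary handling via np.roll are simplifying assumptions.

```python
# Minimal sketch: sum-product belief propagation on a 6-connected
# spatio-temporal grid for binary motion labels (illustrative only).
import numpy as np

def bp_motion_labels(unary, smoothness=0.7, iters=5):
    """unary: (T, H, W, 2) per-pixel likelihoods for [static, moving]."""
    T, H, W, L = unary.shape
    # One message per direction: +t, -t, +y, -y, +x, -x (paired by d ^ 1).
    msgs = np.ones((6, T, H, W, L)) / L
    # Potts-style compatibility: identical labels preferred.
    pairwise = np.full((L, L), 1.0 - smoothness) + np.eye(L) * smoothness
    shifts = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for _ in range(iters):
        beliefs = unary * np.prod(msgs, axis=0)            # (T, H, W, L)
        for d, (dt, dy, dx) in enumerate(shifts):
            opposite = d ^ 1
            # Outgoing message along direction d: belief without the
            # incoming message from that neighbor, passed through pairwise.
            out = (beliefs / np.clip(msgs[opposite], 1e-12, None)) @ pairwise
            out /= out.sum(axis=-1, keepdims=True) + 1e-12
            # Deliver to the neighbor (np.roll wraps at the borders).
            msgs[d] = np.roll(out, shift=(dt, dy, dx), axis=(0, 1, 2))
    beliefs = unary * np.prod(msgs, axis=0)
    return beliefs.argmax(axis=-1)                         # 1 = moving
```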


Medical Image Analysis | 2012

Understanding the phase contrast optics to restore artifact-free microscopy images for segmentation

Zhaozheng Yin; Takeo Kanade; Mei Chen

Phase contrast, a noninvasive microscopy imaging technique, is widely used to capture time-lapse images to monitor the behavior of transparent cells without staining or altering them. Due to the optical principle, phase contrast microscopy images contain artifacts such as the halo and shade-off that hinder image segmentation, a critical step in automated microscopy image analysis. Rather than treating phase contrast microscopy images as general natural images and applying generic image processing techniques to them, we propose to study the optical properties of the phase contrast microscope to model its image formation process. The phase contrast imaging system can be approximated by a linear imaging model. Based on this model and input image properties, we formulate a regularized quadratic cost function to restore artifact-free phase contrast images that directly correspond to the specimen's optical path length. With artifacts removed, high-quality segmentation can be achieved by simply thresholding the restored images. The imaging model and restoration method are quantitatively evaluated on microscopy image sequences with thousands of cells captured over several days. We also demonstrate that accurate restoration lays the foundation for high performance in cell detection and tracking.
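
The restoration step can be pictured with a minimal sketch: if the linear imaging operator is approximated by convolution with a known kernel, minimizing the quadratic cost ||Hf - g||^2 + lam*||f||^2 has a closed-form solution in the Fourier domain. The kernel, the Tikhonov regularizer, and the threshold value below are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch: Tikhonov-regularized restoration assuming the imaging
# model is a convolution with a known (here hypothetical) PSF.
import numpy as np

def restore_quadratic(g, psf, lam=0.01):
    """Minimize ||H f - g||^2 + lam * ||f||^2 in the Fourier domain."""
    H = np.fft.fft2(psf, s=g.shape)                # transfer function of the PSF
    G = np.fft.fft2(g.astype(float))
    F = np.conj(H) * G / (np.abs(H) ** 2 + lam)    # closed-form minimizer
    return np.real(np.fft.ifft2(F))

def segment_by_threshold(f, tau=0.1):
    """With artifacts removed, simple thresholding yields a cell mask."""
    return f > tau

# Hypothetical usage: g = observed phase contrast frame, psf = modeled kernel.
```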


Workshop on Applications of Computer Vision | 2008

Likelihood Map Fusion for Visual Object Tracking

Zhaozheng Yin; Fatih Porikli; Robert T. Collins

Visual object tracking can be considered a figure-ground classification task. In this paper, different features are used to generate a set of likelihood maps for each pixel, indicating the probability of that pixel belonging to the foreground object or the scene background. For example, intensity, texture, motion, saliency, and template matching can all be used to generate likelihood maps. We propose a generic likelihood map fusion framework to combine these heterogeneous features into a fused soft segmentation suitable for mean-shift tracking. All the component likelihood maps contribute to the segmentation based on their classification confidence scores (weights) learned from the previous frame. The evidence combination framework dynamically updates the weights such that, in the fused likelihood map, discriminative foreground/background information is preserved while ambiguous information is suppressed. The framework is applied here to track ground vehicles from thermal airborne video, and is also compared to other state-of-the-art algorithms.
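
A minimal sketch of the fusion idea follows: each feature channel contributes a per-pixel foreground likelihood map, and the maps are combined with weights derived from how well each map separated figure from ground in the previous frame. The specific weighting rule below is an illustrative assumption.

```python
# Minimal sketch: confidence-weighted fusion of per-pixel likelihood maps.
import numpy as np

def fuse_likelihood_maps(maps, weights):
    """maps: list of (H, W) foreground likelihoods in [0, 1]; weights sum to 1."""
    return sum(w * m for w, m in zip(weights, maps))

def update_weights(maps, prev_mask, eps=1e-6):
    """Reward maps that separated figure from ground in the previous frame.
    Assumes prev_mask (boolean) contains both foreground and background pixels."""
    scores = []
    for m in maps:
        # Confidence: mean likelihood inside the previous mask minus outside.
        scores.append(max(m[prev_mask].mean() - m[~prev_mask].mean(), eps))
    scores = np.array(scores)
    return scores / scores.sum()
```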


International Symposium on Biomedical Imaging | 2011

Reliable cell tracking by global data association

Ryoma Bise; Zhaozheng Yin; Takeo Kanade

Automated cell tracking in populations is important for research and discovery in biology and medicine. In this paper, we propose a cell tracking method based on global spatio-temporal data association which considers hypotheses of initialization, termination, translation, division, and false positives in an integrated formulation. First, reliable tracklets (i.e., short trajectories) are generated by linking detection responses based on frame-by-frame association. Next, these tracklets are globally associated over time to obtain final cell trajectories and lineage trees. During global association, tracklets form tree structures in which a mother cell divides into two daughter cells. We formulate the global association for tree structures as a maximum-a-posteriori (MAP) problem and solve it by linear programming. This approach is quantitatively evaluated on sequences with thousands of cells captured over several days.
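
The global association step can be sketched as a small linear program. For brevity, the example below includes only link hypotheses (the full formulation also covers initialization, termination, division, and false positives), relaxes the binary variables to [0, 1], and assumes precomputed log-likelihood scores.

```python
# Minimal sketch: LP relaxation of tracklet association with link hypotheses only.
import numpy as np
from scipy.optimize import linprog

def associate_tracklets(link_scores):
    """link_scores: dict {(i, j): log-likelihood of linking tracklet i to j}."""
    n = 1 + max(max(i, j) for i, j in link_scores)
    hyps = list(link_scores)                       # one variable per hypothesis
    c = -np.array([link_scores[h] for h in hyps])  # linprog minimizes
    # Each tracklet is used at most once as a source and once as a target.
    A = np.zeros((2 * n, len(hyps)))
    for k, (i, j) in enumerate(hyps):
        A[i, k] = 1          # i used as source
        A[n + j, k] = 1      # j used as target
    res = linprog(c, A_ub=A, b_ub=np.ones(2 * n), bounds=(0, 1), method="highs")
    return [hyps[k] for k, x in enumerate(res.x) if x > 0.5]
```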


Computer Vision and Pattern Recognition | 2006

Moving Object Localization in Thermal Imagery by Forward-backward MHI

Zhaozheng Yin; Robert T. Collins

Detecting moving objects automatically is a key component of an automatic visual surveillance and tracking system. In airborne thermal video, the moving objects may be small, color information is not available, and even the intensity appearance may be camouflaged. Previous motion-based moving object detection approaches often use background subtraction, inter-frame differencing, or three-frame differencing. In this paper, we describe a detection and localization method based on forward-backward motion history images (MHI). This method can accurately detect the location and shape of moving objects for initializing a tracker. Using long and varied video sequences, we quantify the effectiveness of this method.
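
A minimal sketch of the forward-backward MHI idea, assuming a short window of grayscale frames: a motion history image is accumulated in each temporal direction and the two are intersected to localize the currently moving object. The decay duration, difference threshold, and windowing are illustrative choices.

```python
# Minimal sketch: forward and backward motion history images over a frame window.
import numpy as np

def motion_history(frames, tau, thresh=15):
    """frames: list of (H, W) grayscale frames; tau: decay duration in frames."""
    mhi = np.zeros_like(frames[0], dtype=float)
    for t in range(1, len(frames)):
        motion = np.abs(frames[t].astype(float) - frames[t - 1]) > thresh
        mhi = np.where(motion, tau, np.maximum(mhi - 1, 0))
    return mhi

def forward_backward_mhi(frames, tau=10):
    fwd = motion_history(frames, tau)          # accumulated forward in time
    bwd = motion_history(frames[::-1], tau)    # accumulated backward in time
    # Pixels recently active in both directions localize the moving object.
    return np.minimum(fwd, bwd)
```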


Multimedia Signal Processing | 2009

Improving depth perception with motion parallax and its application in teleconferencing

Cha Zhang; Zhaozheng Yin; Dinei A. F. Florêncio

Depth perception, or 3D perception, can add greatly to the feeling of immersiveness in many applications such as 3D TV and 3D teleconferencing. Stereopsis and motion parallax are two of the most important cues for depth perception. Most 3D displays today rely on stereopsis to create 3D perception. In this paper, we propose to improve users' depth perception by tracking their motions and creating motion parallax for the rendered image, which can be done even with legacy displays. Two enabling technologies, face tracking and foreground/background segmentation, are discussed in detail. In particular, we propose an efficient and robust feature-based face tracking algorithm that is capable of estimating the face's location and scale accurately. We also propose a novel foreground/background segmentation and matting algorithm with a time-of-flight camera, which is robust to moving background, lighting variations, moving camera, etc. We demonstrate the application of the above technologies in teleconferencing on legacy displays to create pseudo-3D effects.
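
The rendering side of the motion parallax idea can be sketched as shifting a segmented foreground layer more than the background layer according to the tracked face position. The two-layer scene, the gain constants, and the wrap-around shift via np.roll are illustrative assumptions, not the paper's rendering pipeline.

```python
# Minimal sketch: pseudo-3D rendering by layer-dependent shifts driven by face position.
import numpy as np

def render_with_parallax(fg, fg_alpha, bg, face_xy, gain_fg=0.08, gain_bg=0.02):
    """fg, bg: (H, W, 3) layers; fg_alpha: (H, W) matte in [0, 1];
    face_xy: normalized face offset from screen center in [-1, 1]^2."""
    def shift(img, dx, dy):
        return np.roll(np.roll(img, int(dy), axis=0), int(dx), axis=1)
    h, w = bg.shape[:2]
    dx, dy = face_xy[0] * w, face_xy[1] * h
    fg_s = shift(fg, -gain_fg * dx, -gain_fg * dy)             # near layer moves more
    a_s = shift(fg_alpha, -gain_fg * dx, -gain_fg * dy)[..., None]
    bg_s = shift(bg, -gain_bg * dx, -gain_bg * dy)             # far layer moves less
    return (a_s * fg_s + (1 - a_s) * bg_s).astype(bg.dtype)    # alpha composite
```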


International Symposium on Biomedical Imaging | 2010

Cell segmentation in microscopy imagery using a bag of local Bayesian classifiers

Zhaozheng Yin; Ryoma Bise; Mei Chen; Takeo Kanade

Cell segmentation in microscopy imagery is essential for many bioimage applications such as cell tracking. To segment cells from the background accurately, we present a pixel classification approach that is independent of cell type or imaging modality. We train a set of Bayesian classifiers from clustered local training image patches. Each Bayesian classifier is an expert at making decisions in its specific domain. The decision from the mixture of experts determines how likely a new pixel is to be a cell pixel. We demonstrate the effectiveness of this approach on four cell types with diverse morphologies under different microscopy imaging modalities.
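
A minimal sketch of the bag-of-local-Bayesian-classifiers idea using scikit-learn: local patch descriptors are clustered, one Gaussian naive-Bayes expert is trained per cluster, and a new patch is scored by the expert of its nearest cluster. The descriptor choice, cluster count, and routing rule are illustrative assumptions.

```python
# Minimal sketch: cluster patches, train one Bayesian expert per cluster,
# and score new pixels with the matching expert.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB

def train_experts(patch_features, pixel_labels, n_experts=8):
    """patch_features: (N, D) local patch descriptors; pixel_labels: (N,) 0/1."""
    km = KMeans(n_clusters=n_experts, n_init=10).fit(patch_features)
    experts = []
    for k in range(n_experts):
        idx = km.labels_ == k
        experts.append(GaussianNB().fit(patch_features[idx], pixel_labels[idx]))
    return km, experts

def cell_probability(km, experts, patch_features):
    """Route each new patch to its nearest expert and return P(cell pixel)."""
    assignments = km.predict(patch_features)
    probs = np.zeros(len(patch_features))
    for k, clf in enumerate(experts):
        idx = assignments == k
        if idx.any():
            # Assumes each cluster saw both classes during training.
            probs[idx] = clf.predict_proba(patch_features[idx])[:, 1]
    return probs
```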


Medical Image Analysis | 2013

Cell segmentation in phase contrast microscopy images via semi-supervised classification over optics-related features.

Hang Su; Zhaozheng Yin; Seungil Huh; Takeo Kanade

Phase-contrast microscopy is one of the most common and convenient imaging modalities for observing long-term multi-cellular processes; it generates images by the interference of light passing through transparent specimens and the background medium with different phase retardations. Despite many years of study, computer-aided analysis of cell behavior in phase contrast microscopy is challenged by image quality issues and artifacts caused by the phase contrast optics. To address these unsolved challenges, the authors propose (1) a phase contrast microscopy image restoration method that produces phase retardation features, which are intrinsic features of phase contrast microscopy, and (2) a semi-supervised learning based algorithm for cell segmentation, a fundamental task for various cell behavior analyses. Specifically, the image formation process of phase contrast microscopy images is first computationally modeled with a dictionary of diffraction patterns; as a result, each pixel of a phase contrast microscopy image is represented by a linear combination of the bases, called phase retardation features. Images are then partitioned into phase-homogeneous atoms by clustering neighboring pixels with similar phase retardation features. Consequently, cell segmentation is performed via a semi-supervised classification technique over the phase-homogeneous atoms. Experiments demonstrate that the proposed approach produces quality segmentation of individual cells and outperforms previous approaches.
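
The second half of the pipeline can be sketched as follows, assuming the per-pixel phase retardation features have already been computed from the diffraction-pattern dictionary: neighboring pixels with similar features are grouped into atoms, and the atoms are classified with a semi-supervised learner. The clustering granularity, the spatial weighting, and LabelSpreading as the classifier are stand-ins, not the paper's exact algorithm.

```python
# Minimal sketch: phase-homogeneous atoms via clustering, then
# semi-supervised classification of the atoms.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.semi_supervised import LabelSpreading

def segment_from_features(features, seed_labels, n_atoms=500):
    """features: (H, W, D) phase retardation features; seed_labels: (H, W)
    with 1=cell, 0=background, -1=unlabeled (a few annotated scribbles,
    covering at least one atom per class)."""
    H, W, D = features.shape
    yy, xx = np.mgrid[0:H, 0:W]
    # Append weakly weighted coordinates so clusters are spatially coherent.
    X = np.concatenate([features.reshape(-1, D),
                        np.stack([yy.ravel(), xx.ravel()], axis=1) * 0.01], axis=1)
    atoms = KMeans(n_clusters=n_atoms, n_init=4).fit_predict(X)
    # One mean feature vector and one (possibly missing) label per atom.
    atom_feat = np.array([X[atoms == a].mean(axis=0) for a in range(n_atoms)])
    atom_lab = np.full(n_atoms, -1)
    for a in range(n_atoms):
        lab = seed_labels.ravel()[atoms == a]
        lab = lab[lab >= 0]
        if lab.size:
            atom_lab[a] = int(np.round(lab.mean()))
    model = LabelSpreading(kernel="rbf", gamma=0.5).fit(atom_feat, atom_lab)
    return model.transduction_[atoms].reshape(H, W)        # per-pixel class map
```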


Computer Vision and Pattern Recognition | 2009

Shape constrained figure-ground segmentation and tracking

Zhaozheng Yin; Robert T. Collins

Global shape information is an effective top-down complement to bottom-up figure-ground segmentation as well as a useful constraint to avoid drift during adaptive tracking. We propose a novel method to embed global shape information into local graph links in a Conditional Random Field (CRF) framework. Given object shapes from several key frames, we automatically collect a shape dataset on-the-fly and perform statistical analysis to build a collection of deformable shape templates representing global object shape. In new frames, simulated annealing and local voting align the deformable template with the image to yield a global shape probability map. The global shape probability is combined with a region-based probability-of-object-boundary map and the pixel-level intensity gradient to determine each link cost in the graph. The CRF energy is minimized by min-cut, followed by Random Walk on the uncertain boundary region to obtain a soft segmentation result. Experiments are demonstrated on both medical and natural images with deformable object shapes.
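
As a rough sketch of how the three cues might be blended into per-link costs before energy minimization (the min-cut and Random Walk steps are omitted): links become cheap to cut wherever the shape probability, the boundary map, or the intensity changes sharply between neighboring pixels. The mixing weights and the exponential mapping are illustrative assumptions, not the paper's exact cost.

```python
# Minimal sketch: combine shape probability, boundary probability, and
# intensity gradient into 4-neighbor link costs.
import numpy as np

def link_costs(shape_prob, pb_map, image, w=(0.4, 0.4, 0.2), sigma=10.0):
    """shape_prob: (H, W) global shape probability; pb_map: (H, W) probability
    of boundary; image: (H, W) intensity. Returns horizontal and vertical
    link costs (low cost = likely object boundary)."""
    def neighbor_diff(a):
        return np.abs(np.diff(a, axis=1)), np.abs(np.diff(a, axis=0))
    gx, gy = neighbor_diff(image.astype(float))
    sx, sy = neighbor_diff(shape_prob)
    bx, by = neighbor_diff(pb_map)
    # Stronger cue differences across a link make the link cheaper to cut.
    cost_x = np.exp(-(w[0] * sx + w[1] * bx + w[2] * gx / sigma))
    cost_y = np.exp(-(w[0] * sy + w[1] * by + w[2] * gy / sigma))
    return cost_x, cost_y
```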


ACM Multimedia | 2015

Human Activity Recognition Using Wearable Sensors by Deep Convolutional Neural Networks

Wenchao Jiang; Zhaozheng Yin

Human physical activity recognition based on wearable sensors has applications relevant to our daily life, such as healthcare. How to achieve high recognition accuracy with low computational cost is an important issue in ubiquitous computing. Rather than exploring handcrafted features from time-series sensor signals, we assemble signal sequences of accelerometers and gyroscopes into a novel activity image, which enables Deep Convolutional Neural Networks (DCNN) to automatically learn the optimal features from the activity image for the activity recognition task. Our proposed approach is evaluated on three public datasets and outperforms state-of-the-art methods in terms of recognition accuracy and computational cost.
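
A minimal sketch of the two stages, assuming 6-axis windows of length 128 and a small PyTorch network: the accelerometer and gyroscope channels are stacked into a 2D activity image, which a compact CNN classifies. The channel ordering, window length, and network depth are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch: build an activity image from wearable sensor windows and
# classify it with a small convolutional network.
import torch
import torch.nn as nn

def make_activity_image(window):
    """window: (6, T) tensor of ax, ay, az, gx, gy, gz samples over one
    time window; returns a (1, 6, T) single-channel image."""
    return window.unsqueeze(0)

class ActivityCNN(nn.Module):
    def __init__(self, n_classes=6, T=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(3, 5), padding=(1, 2)), nn.ReLU(),
            nn.MaxPool2d((1, 2)),
            nn.Conv2d(16, 32, kernel_size=(3, 5), padding=(1, 2)), nn.ReLU(),
            nn.MaxPool2d((1, 2)),
        )
        self.classifier = nn.Linear(32 * 6 * (T // 4), n_classes)

    def forward(self, x):                      # x: (B, 1, 6, T)
        h = self.features(x)
        return self.classifier(h.flatten(1))

# Hypothetical usage:
# model = ActivityCNN()
# logits = model(make_activity_image(torch.randn(6, 128)).unsqueeze(0))
```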

Collaboration


Dive into Zhaozheng Yin's collaborations.

Top Co-Authors

Takeo Kanade (Carnegie Mellon University)
Robert T. Collins (Pennsylvania State University)
Wenchao Jiang (Missouri University of Science and Technology)
Mei Chen (State University of New York System)
Mingzhong Li (Missouri University of Science and Technology)
Yunxiang Mao (Missouri University of Science and Technology)
Ryoma Bise (National Institute of Informatics)
Haohan Li (Missouri University of Science and Technology)