Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Hongkai Yu is active.

Publication


Featured research published by Hongkai Yu.


IET Intelligent Transport Systems | 2014

Video-based traffic data collection system for multiple vehicle types

Shuguang Li; Hongkai Yu; Jingru Zhang; Kaixin Yang; Ran Bin

Traffic data of multiple vehicle types are important for pavement design, traffic operations and traffic control. For the complex traffic conditions in developing countries, no suitable tool is available for collecting such data. Considering the importance of these data and the problems of traffic data collection, a new video-based traffic data collection system for multiple vehicle types is developed. This image processing-based system can track and classify vehicles under mixed traffic conditions; the type and speed of every passing vehicle are recognized. Notably, the system takes cross-lane vehicles into consideration and applies measures to reduce the classification errors caused by vehicle occlusions. Finally, the flows and mean speeds of multiple vehicle types are output. The system was tested under four different weather conditions. The accuracy of vehicle counting was 97.4% and the error of vehicle classification was 8.3%. The correlation coefficient between speeds detected by this system and by a radar gun was 0.898, and the mean error of speed detection was only 2.3 km/h. These results indicate that the system is reliable for collecting traffic data of multiple vehicle types.
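The abstract describes a blob-based tracking and classification pipeline. Below is a minimal Python sketch of that kind of system using OpenCV background subtraction; the area thresholds, calibration constant and frame rate are illustrative assumptions, not values from the paper, and a full system would additionally track each blob across frames so that every vehicle is counted exactly once.

```python
import cv2

# Illustrative constants (assumptions, not values from the paper).
AREA_SMALL, AREA_LARGE = 800, 5000   # blob-area cut-offs for vehicle classes
METERS_PER_PIXEL = 0.05              # hypothetical planar camera calibration
FPS = 25.0

def classify_blob(area):
    """Map a foreground-blob area to a coarse vehicle type."""
    if area < AREA_SMALL:
        return "motorcycle"
    if area < AREA_LARGE:
        return "car"
    return "truck/bus"

def blob_speed_kmh(pixel_displacement):
    """Speed estimate from a blob's displacement between consecutive
    frames, under the hypothetical calibration above."""
    return pixel_displacement * METERS_PER_PIXEL * FPS * 3.6

def blob_stats(video_path):
    """Per-frame foreground-blob type counts; a real system would link
    blobs across frames (tracking) before counting and measuring speed."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    counts = {}
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadows
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            area = cv2.contourArea(c)
            if area < 100:  # ignore noise blobs
                continue
            vtype = classify_blob(area)
            counts[vtype] = counts.get(vtype, 0) + 1
    cap.release()
    return counts
```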


International Conference on Computer Vision | 2015

Co-Interest Person Detection from Multiple Wearable Camera Videos

Yuewei Lin; Kareem Abdelfatah; Youjie Zhou; Xiaochuan Fan; Hongkai Yu; Hui Qian; Song Wang

Wearable cameras, such as Google Glass and GoPro, enable video data collection over larger areas and from different views. In this paper, we tackle a new problem of locating the co-interest person (CIP), i.e., the one who draws attention from most camera wearers, from temporally synchronized videos taken by multiple wearable cameras. Our basic idea is to exploit the motion patterns of people and use them to correlate the persons across different videos, instead of performing appearance-based matching as in traditional video co-segmentation/localization. This way, we can identify the CIP even if a group of people with similar appearance are present in the view. More specifically, we detect a set of persons in each frame as candidates for the CIP and then build a Conditional Random Field (CRF) model to select the one with consistent motion patterns across different videos and high spatio-temporal consistency within each video. We collected three sets of wearable-camera videos for testing the proposed algorithm. All the involved people have similar appearances in the collected videos, and the experiments demonstrate the effectiveness of the proposed algorithm.
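As a toy illustration of the paper's core idea, correlating motion patterns across cameras rather than matching appearance, the sketch below scores every combination of per-video candidates by summed pairwise motion correlation and keeps the best one. This exhaustive search is a hypothetical stand-in for the CRF inference actually used in the paper.

```python
import numpy as np
from itertools import product

def motion_correlation(a, b):
    """Normalized correlation between two motion-magnitude sequences."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))

def pick_cip(tracks_per_video):
    """tracks_per_video: list (one entry per camera) of lists of
    per-candidate motion sequences (1-D numpy arrays, same length).
    Returns one candidate index per video maximizing the summed
    pairwise cross-video motion correlation."""
    best, best_score = None, -np.inf
    for combo in product(*[range(len(t)) for t in tracks_per_video]):
        score = 0.0
        for i in range(len(combo)):
            for j in range(i + 1, len(combo)):
                score += motion_correlation(tracks_per_video[i][combo[i]],
                                            tracks_per_video[j][combo[j]])
        if score > best_score:
            best, best_score = combo, score
    return best
```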


International Conference on Image Processing | 2014

Unsupervised co-segmentation based on a new global GMM constraint in MRF

Hongkai Yu; Min Xian; Xiaojun Qi

This paper proposes a new Markov Random Field (MRF) optimization model for co-segmentation. A co-saliency model is incorporated into our model to make it fully unsupervised and able to work well for images with similar backgrounds. The Gaussian Mixture Model (GMM) based dissimilarity between the foreground in each image and the common objects in the set is introduced as a new global constraint (i.e., energy term) in our model. Finally, we introduce an alternative approximation of the energy function, which can be minimized iteratively by Graph Cuts. Experimental results on two datasets show that our algorithm achieves better or comparable accuracy compared with state-of-the-art algorithms.
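A minimal sketch of what the global GMM constraint could look like, using scikit-learn; the pooled-foreground fitting and the mean negative log-likelihood scoring are assumptions of this toy version, not the paper's exact energy term.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def global_gmm_energy(fg_pixels_per_image, n_components=5):
    """Fit one GMM to the pooled foreground pixels (the 'common object'
    color model) and score how far each image's foreground deviates
    from it via mean negative log-likelihood.
    fg_pixels_per_image: list of (N_i, 3) RGB arrays."""
    pooled = np.vstack(fg_pixels_per_image)
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=0).fit(pooled)
    # Higher energy = foreground less consistent with the shared model.
    return [-gmm.score(p) for p in fg_pixels_per_image]
```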


Computer Vision and Pattern Recognition | 2016

Groupwise Tracking of Crowded Similar-Appearance Targets from Low-Continuity Image Sequences

Hongkai Yu; Youjie Zhou; Jeff P. Simmons; Craig Przybyla; Yuewei Lin; Xiaochuan Fan; Yang Mi; Song Wang

Automatic tracking of large-scale crowded targets is of particular importance in many applications, such as crowded people/vehicle tracking in video surveillance, fiber tracking in materials science, and cell tracking in biomedical imaging. This problem becomes very challenging when the targets show similar appearance and the inter-slice/inter-frame continuity is low due to sparse sampling, camera motion and target occlusion. The main challenge comes from the association step, which aims at matching the predictions and the observations of the multiple targets. In this paper, we propose a new groupwise method to explore the target group information and employ the within-group correlations for association and tracking. In particular, the within-group association is modeled by a non-rigid 2D Thin-Plate transform, and a sequence of group shrinking, group growing and group merging operations is then developed to refine the composition of each group. We apply the proposed method to track large-scale fibers from microscopy material images and compare its performance against several other multi-target tracking methods. We also apply the proposed method to track crowded people from videos with poor inter-frame continuity.
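The within-group association via a non-rigid 2D Thin-Plate transform might be sketched as below, using SciPy's thin-plate-spline interpolator and Hungarian matching; the seed matches and the single-pass warp are simplifying assumptions of this sketch, since the paper refines groups iteratively with shrinking/growing/merging operations.

```python
from scipy.interpolate import RBFInterpolator
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def groupwise_associate(pred, obs, confident_pairs):
    """pred, obs: (N,2)/(M,2) numpy arrays of target predictions and
    detections; confident_pairs: list of (pred_idx, obs_idx) seed
    matches (at least 3) used to fit the group's non-rigid deformation.
    Returns matched (prediction, observation) index pairs."""
    src = pred[[i for i, _ in confident_pairs]]
    dst = obs[[j for _, j in confident_pairs]]
    warp = RBFInterpolator(src, dst, kernel="thin_plate_spline")
    warped = warp(pred)                       # move predictions by the group warp
    cost = cdist(warped, obs)                 # then match nearest observations
    rows, cols = linear_sum_assignment(cost)  # Hungarian assignment
    return list(zip(rows.tolist(), cols.tolist()))
```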


Computer Vision and Pattern Recognition | 2016

Identifying Same Persons from Temporally Synchronized Videos Taken by Multiple Wearable Cameras

Kang Zheng; Hao Guo; Xiaochuan Fan; Hongkai Yu; Song Wang

Video-based human action recognition benefits from multiple cameras which can provide temporally synchronized, multi-view videos. Cross-video person identification, i.e., determining whether, at a given time, persons tracked in different videos are the same person or not, is a key step in integrating such multi-view information for collaborative action recognition. For fixed cameras, this step is relatively easy since different cameras can be pre-calibrated. In this paper, we study cross-video person identification for wearable cameras, which are constantly moving with the wearers. Specifically, we take the tracked persons from different videos to be the same person if their 3D poses are the same, given that these videos are synchronized. We adapt an existing algorithm to estimate the tracked persons' 3D poses in each 2D video using motion-based features. Experiments show that, although 3D pose estimation is not perfect, the proposed method can still lead to better cross-video person identification than using appearance information.
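A minimal sketch of the pose-comparison step, under the assumption that normalized 3D joint distances decide whether two tracked persons match; the root-joint normalization and the threshold are illustrative, and rotation alignment is omitted.

```python
import numpy as np

def normalize_pose(pose):
    """Center a (J,3) 3D pose at its root joint and scale to unit size,
    removing camera-dependent translation and scale (joint 0 is assumed
    to be the root in this sketch)."""
    centered = pose - pose[0]
    return centered / (np.linalg.norm(centered) + 1e-8)

def same_person(poses_a, poses_b, thresh=0.1):
    """poses_a, poses_b: (T,J,3) temporally synchronized 3D pose
    sequences from two videos. Declares a match if the mean distance
    between normalized poses is small; 'thresh' is illustrative."""
    d = [np.linalg.norm(normalize_pose(a) - normalize_pose(b))
         for a, b in zip(poses_a, poses_b)]
    return float(np.mean(d)) < thresh
```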


BMC Medical Imaging | 2015

Automated lesion detection on MRI scans using combined unsupervised and supervised methods

Dazhou Guo; Julius Fridriksson; Paul Fillmore; Chris Rorden; Hongkai Yu; Kang Zheng; Song Wang

Background: Accurate and precise detection of brain lesions on MR images (MRI) is paramount for accurately relating lesion location to impaired behavior. In this paper, we present a novel method to automatically detect brain lesions from a T1-weighted 3D MRI. The proposed method combines the advantages of both unsupervised and supervised methods.

Methods: First, unsupervised methods perform a unified segmentation-normalization to warp images from the native space into a standard space and to generate probability maps for different tissue types, e.g., gray matter, white matter and fluid. This allows us to construct an initial lesion probability map by comparing the normalized MRI to healthy control subjects. Then, we perform non-rigid and reversible atlas-based registration to refine the probability maps of gray matter, white matter, external CSF, ventricle, and lesions. These probability maps are combined with the normalized MRI to construct three types of features, with which we use supervised methods to train three support vector machine (SVM) classifiers for a combined classifier. Finally, the combined classifier is used to accomplish lesion detection.

Results: We tested this method using T1-weighted MRIs from 60 in-house stroke patients. Using leave-one-out cross-validation, the proposed method achieves an average Dice coefficient of 73.1% when compared to lesion maps hand-delineated by trained neurologists. Furthermore, we tested the proposed method on the T1-weighted MRIs in the MICCAI BRATS 2012 dataset, where it achieves an average Dice coefficient of 66.5% in comparison to the expert-annotated tumor maps provided with the dataset. In addition, on these two test datasets, the proposed method shows performance competitive with three state-of-the-art methods, including Stamatakis et al., Seghier et al., and Sanjuan et al.

Conclusions: In this paper, we introduced a novel automated procedure for lesion detection from T1-weighted MRIs by combining both an unsupervised and a supervised component. In the unsupervised component, we proposed a method to identify the lesioned hemisphere to help normalize the patient MRI with lesions and initialize/refine a lesion probability map. In the supervised component, we extracted three different-order statistical features from both the tissue/lesion probability maps obtained from the unsupervised component and the original MRI intensity. Three support vector machine classifiers are then trained for the three features respectively and combined for final voxel-based lesion classification.
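The combined classifier can be sketched as one SVM per feature type with averaged decision scores, as below; the RBF kernel and parameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC

def train_combined(features_list, labels):
    """Train one SVM per feature type. features_list: list of (N, d_k)
    arrays describing the same N voxels; labels: (N,) lesion labels."""
    return [SVC(kernel="rbf", C=1.0).fit(X, labels) for X in features_list]

def predict_combined(svms, features_list):
    """Fuse the per-feature SVMs by averaging their decision scores,
    then make the voxel-wise lesion / non-lesion decision."""
    scores = np.mean([clf.decision_function(X)
                      for clf, X in zip(svms, features_list)], axis=0)
    return scores > 0
```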


International Conference on Multimedia and Expo | 2014

Unsupervised cosegmentation based on superpixel matching and FastGrabCut

Hongkai Yu; Xiaojun Qi

This paper proposes a novel unsupervised cosegmentation method which automatically segments the common objects in multiple images. It designs a simple superpixel matching algorithm to explore the inter-image similarity, then constructs the object mask for each image using the matched superpixels. This object mask is a convex hull potentially containing the common objects and some background. Finally, it applies a new FastGrabCut algorithm, an improved GrabCut algorithm, to the object mask to simultaneously improve segmentation efficiency and maintain segmentation accuracy. The FastGrabCut algorithm introduces a preliminary classification to accelerate convergence. It uses the Expectation-Maximization (EM) algorithm to estimate optimal Gaussian Mixture Model (GMM) parameters of the object and background, and then applies Graph Cuts to minimize the energy function for each image. Experimental results on the iCoseg dataset demonstrate the accuracy and robustness of our cosegmentation method.
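A minimal sketch of the final segmentation step using OpenCV's standard cv2.grabCut seeded by the convex-hull object mask; note this is the plain GrabCut, not the accelerated FastGrabCut variant proposed in the paper.

```python
import numpy as np
import cv2

def segment_from_hull(image, hull_mask, iters=3):
    """Run GrabCut seeded by the convex-hull object mask built from
    matched superpixels: pixels inside the hull are marked probable
    foreground, pixels outside are marked definite background."""
    mask = np.where(hull_mask > 0, cv2.GC_PR_FGD,
                    cv2.GC_BGD).astype(np.uint8)
    # GMM model buffers required by the OpenCV API.
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, None, bgd, fgd, iters,
                cv2.GC_INIT_WITH_MASK)
    # Binary object mask: definite or probable foreground.
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
```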


Journal of Electronic Imaging | 2017

Identifying designs from incomplete, fragmented cultural heritage objects by curve-pattern matching

Jun Zhou; Haozhou Yu; Karen Smith; Colin Wilder; Hongkai Yu; Song Wang

The study of cultural-heritage objects embellished with realistic and abstract designs made up of connected and intertwined curves crosscuts a number of related disciplines, including archaeology, art history, and heritage management. However, many objects, such as pottery sherds found in the archaeological record, are fragmentary, making the underlying complete designs unknowable at the scale of the sherd fragment. Efforts to reconstruct and study complete designs are stymied because 1) most fragmentary cultural-heritage objects contain only a small portion of the underlying full design, 2) in the case of a stamping application, the same design may be applied multiple times with spatial overlap on one object, and 3) curve patterns detected on an object are usually incomplete and noisy. As a result, classical curve-pattern matching algorithms, such as Chamfer matching, may perform poorly in identifying the underlying design. In this paper, we develop a new partial-to-global curve matching algorithm to address these challenges and better identify the full design from a fragmented cultural-heritage object. Specifically, we develop the algorithm to identify the designs of the carved wooden paddles of the Southeastern Woodlands from unearthed pottery sherds. A set of pottery sherds from the Snow Collection, curated at Georgia Southern University, is used to test the proposed algorithm, with promising results.
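For context, the classical Chamfer-matching baseline that the abstract contrasts against can be sketched in a few lines with a distance transform; the coordinate convention and the offset search are assumptions of this toy version, and the paper's partial-to-global algorithm goes well beyond it.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(design_edges, fragment_points, offset):
    """Mean distance from the (shifted) fragment curve points to the
    nearest curve pixel of the full design; lower is a better match.
    design_edges: binary 2-D array of the design's curves;
    fragment_points: (N,2) pixel coordinates of the sherd's curves;
    offset: (row, col) placement of the fragment on the design."""
    # Distance from every pixel to the nearest design-curve pixel.
    dist = distance_transform_edt(~design_edges.astype(bool))
    pts = np.round(fragment_points + np.asarray(offset)).astype(int)
    # Keep only points that land inside the design image.
    pts = pts[(pts[:, 0] >= 0) & (pts[:, 0] < dist.shape[0]) &
              (pts[:, 1] >= 0) & (pts[:, 1] < dist.shape[1])]
    return float(dist[pts[:, 0], pts[:, 1]].mean())
```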


IEEE Transactions on Image Processing | 2016

Large-Scale Fiber Tracking Through Sparsely Sampled Image Sequences of Composite Materials

Youjie Zhou; Hongkai Yu; Jeff P. Simmons; Craig Przybyla; Song Wang

Fast and accurate characterization of fiber microstructures plays a central role in enabling material scientists to analyze physical properties of continuous fiber-reinforced composite materials. In materials science, this is usually achieved by continuously cross-sectioning a 3D material sample into a sequence of 2D microscopic images, followed by a fiber detection/tracking algorithm applied to the obtained image sequence. To speed up this process and handle larger material samples, this paper proposes sparse sampling with larger inter-slice distance in cross-sectioning and develops a new algorithm that can robustly track large-scale fibers from such a sparsely sampled image sequence. In particular, the problem is formulated as multi-target tracking, and Kalman filters are applied to track each fiber along the image sequence. One main challenge in this tracking process is to correctly associate each fiber with its observation, given that fiber observations are large-scale, crowded, and very similar in appearance within a 2D slice, and that there may be a large gap between the predicted location of a fiber and its observation under sparse sampling. To address this challenge, a novel group-wise association algorithm is developed by leveraging the fact that fibers are implanted in bundles and fibers in the same bundle are highly correlated through the image sequence. In experiments, the proposed algorithm is tested on three tiles of 100-slice S200 material samples and the tracking performance is evaluated using 1136 human-annotated ground-truth fiber tracks. Both quantitative and qualitative results show that the proposed algorithm clearly outperforms state-of-the-art multiple-target tracking algorithms on sparsely sampled image sequences.
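A minimal sketch of per-fiber Kalman prediction with Hungarian association, assuming a constant-velocity state model per slice; the noise magnitudes are illustrative, and the paper's key contribution, the group-wise bundle constraint, is only noted in a comment.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Constant-velocity model per fiber: state [x, y, vx, vy]; one "time
# step" is one slice. Noise magnitudes are illustrative assumptions.
F = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q, R = np.eye(4) * 0.1, np.eye(2) * 1.0

def kalman_step(x, P, z):
    """One predict+update for a fiber track; z is the associated (x, y)
    fiber center detected on the next slice."""
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

def associate(predicted_xy, detections):
    """Hungarian matching of predicted fiber positions (N,2) to
    detections (M,2); the paper additionally constrains this step
    per fiber bundle (group-wise association)."""
    cost = np.linalg.norm(predicted_xy[:, None] - detections[None], axis=2)
    return linear_sum_assignment(cost)
```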


Pattern Recognition Letters | 2018

Multiple human tracking in wearable camera videos with informationless intervals

Hongkai Yu; Haozhou Yu; Hao Guo; Jeff P. Simmons; Qin Zou; Wei Feng; Song Wang

Multiple human tracking plays a key role in video surveillance and human activity detection. Compared to fixed cameras, wearable cameras, such as GoPro and Google Glass, can follow and capture the targets (people of interest) over larger areas and from better view angles, following the motion of the camera wearers. However, wearable-camera videos suffer from sudden view changes, resulting in informationless (temporal) intervals of target loss, which make multiple human tracking a much more challenging problem. In particular, given a large and unknown camera-pose change, it is difficult to associate the multiple targets over such an interval based on spatial proximity or appearance matching. In this paper, we propose a new approach in which the spatial pattern of the multiple targets is extracted, predicted and then leveraged to help associate the targets over an informationless interval. We also propose a classification-based algorithm to identify the informationless intervals in wearable-camera videos. Experiments are conducted on a new dataset containing 30 wearable-camera videos, and the performance is compared to several other multi-target tracking algorithms.
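As a toy stand-in for the paper's learned classifier, an informationless frame might be flagged by simple blur and frame-difference cues, as below; both thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

def is_informationless(prev_gray, gray, blur_thresh=50.0, diff_thresh=60.0):
    """Flag a grayscale frame as informationless when it is badly
    blurred (low Laplacian variance) or the view changed abruptly
    (large mean absolute difference from the previous frame)."""
    blur = cv2.Laplacian(gray, cv2.CV_64F).var()
    diff = float(np.mean(cv2.absdiff(prev_gray, gray)))
    return blur < blur_thresh or diff > diff_thresh
```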

Collaboration


Dive into Hongkai Yu's collaborations.

Top Co-Authors

Song Wang, University of South Carolina
Jeff P. Simmons, Air Force Research Laboratory
Youjie Zhou, University of South Carolina
Craig Przybyla, Air Force Research Laboratory
Hao Guo, University of South Carolina
Kang Zheng, University of South Carolina
Xiaochuan Fan, University of South Carolina
Dazhou Guo, University of South Carolina
Yuewei Lin, University of South Carolina
Haozhou Yu, University of South Carolina