Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Baopu Li is active.

Publication


Featured research published by Baopu Li.


IEEE Transactions on Systems, Man, and Cybernetics | 2014

Robust object tracking with reacquisition ability using online learned detector

Tianyu Yang; Baopu Li; Max Q.-H. Meng

Long-term tracking is a challenging task for many applications. In this paper, we propose a novel tracking approach that can adapt to various appearance changes such as illumination, motion, and occlusion, and that can robustly reacquire the target after drifting. We use a condensation-based method with an online support vector machine as a reliable observation model to realize adaptive tracking. To redetect the target after drifting, we propose a cascade detector based on random ferns that can detect the target robustly in real time. After redetection, we further introduce a refinement strategy that improves the tracker's performance by using a matching template to remove support vectors corresponding to possibly wrong updates. Extensive comparison experiments on typical and challenging benchmark datasets demonstrate the robust and encouraging performance of the proposed approach.
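
As a rough illustration of the condensation idea in the abstract, the sketch below runs a toy 1-D particle filter; the `observe` likelihood here is a hypothetical stand-in for the paper's online-SVM observation model, and the motion model is plain Gaussian diffusion.

```python
import random

def condensation_step(particles, weights, observe, jitter=1.0):
    """One condensation iteration: resample, diffuse, re-weight."""
    # Resample particles proportionally to their current weights.
    resampled = random.choices(particles, weights=weights, k=len(particles))
    # Diffuse with Gaussian noise (a simple stand-in motion model).
    moved = [p + random.gauss(0.0, jitter) for p in resampled]
    # Re-weight with the observation model and normalise.
    new_w = [observe(p) for p in moved]
    total = sum(new_w) or 1.0
    return moved, [w / total for w in new_w]

# Toy usage: particles should concentrate near the "target" at x = 5.
random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(200)]
weights = [1.0 / 200] * 200
for _ in range(20):
    particles, weights = condensation_step(
        particles, weights, observe=lambda x: 1.0 / (1.0 + (x - 5.0) ** 2))
estimate = sum(p * w for p, w in zip(particles, weights))
```

The weighted mean `estimate` settles near the likelihood peak, which is the tracker's state estimate in this toy setting.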


International Conference on Automation and Logistics | 2012

Removal of non-informative frames for wireless capsule endoscopy video segmentation

Zhe Sun; Baopu Li; Ran Zhou; Huimin Zheng; Max Q.-H. Meng

Wireless capsule endoscopy (WCE) video segmentation plays an important part in automatic WCE diagnosis, since it helps physicians and saves time. In the automatic WCE video segmentation process, impurity frames containing opaque digestive juice, food residue, and excrement not only waste considerable time but also lower segmentation accuracy because of their variation in color and pattern. The major impurities that affect WCE video segmentation can be divided into two categories: gastric juice and bubbles. In this paper, a novel two-stage preprocessing approach is therefore proposed to remove impurity frames from WCE videos. In the first stage, gastric-juice frames are eliminated using local Hue-Saturation (HS) histogram features. In the second stage, a new approach that combines the Color Local Binary Patterns (CLBP) algorithm with the Discrete Cosine Transform (DCT) removes bubble frames. A K-Nearest Neighbor (KNN) classifier is used in both stages for its speed. Experiments demonstrate that the proposed scheme effectively removes non-informative frames from WCE video, with stage accuracies as high as 99.31% and 97.54%, respectively.
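
The first-stage classification can be pictured with a minimal HS-histogram-plus-KNN sketch. The pixel data, bin count, and the two classes below are invented for illustration and do not reproduce the paper's actual features.

```python
from collections import Counter
import math

def hs_histogram(pixels, bins=4):
    """Normalised Hue-Saturation histogram; pixels are (h, s) pairs in [0, 1)."""
    hist = [0] * (bins * bins)
    for h, s in pixels:
        hist[int(h * bins) * bins + int(s * bins)] += 1
    n = len(pixels) or 1
    return [c / n for c in hist]

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); plain k-NN majority vote."""
    dists = sorted((math.dist(f, query), label) for f, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy data: a "gastric juice" frame has concentrated hue/saturation,
# an informative tissue frame has spread-out hues (both hypothetical).
juice = hs_histogram([(0.1, 0.9)] * 50)
tissue = hs_histogram([(i / 50, 0.3) for i in range(50)])
train = [(juice, "juice"), (tissue, "informative")]
query = hs_histogram([(0.12, 0.85)] * 50)
label = knn_predict(train, query, k=1)
```

The query frame's histogram sits close to the juice prototype, so the vote marks it for removal.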


International Conference on Information and Automation | 2011

Capsule endoscopy video boundary detection

Baopu Li; Max Q.-H. Meng

Capsule endoscopy (CE) is a recently developed technology that enables direct visualization of the inner tract of the whole small bowel (SB) in the human body. Thanks to this breakthrough over traditional endoscopy imaging modalities, the device, about the size of a small pill, has seen wide application in hospitals since it was approved for marketing in 2001. However, it is reported that inspecting the video data produced in each test costs a clinician about two hours on average. To reduce this burden on physicians, automatic video analysis techniques for CE video are needed. Since a CE video averages about 60,000 frames per test, it is beneficial to segment such a long video into meaningful parts. In this study, we investigate applying video boundary detection methods for this purpose. Color and textural features are used to represent the visual content, and CE video boundary detection is then formulated as finding local maxima along the dissimilarity curve of a CE video. Since a capsule undergoes chaotic motion originating from peristalsis of the digestive tract, motion analysis is further applied to refine the results of the above steps. Preliminary experimental results suggest that the proposed scheme is usable for CE video segmentation.
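
The boundary-detection formulation, finding local maxima on a frame-to-frame dissimilarity curve, can be sketched in a few lines; the curve values and threshold below are hypothetical.

```python
def boundary_candidates(curve, threshold):
    """Indices where the dissimilarity curve has a local maximum
    above `threshold`; these are candidate segment boundaries."""
    return [i for i in range(1, len(curve) - 1)
            if curve[i] > threshold
            and curve[i] >= curve[i - 1]
            and curve[i] > curve[i + 1]]

# Toy dissimilarity curve with two pronounced peaks.
curve = [0.1, 0.2, 0.9, 0.3, 0.2, 0.8, 0.25, 0.1]
peaks = boundary_candidates(curve, 0.5)
```

In the paper's pipeline these candidates would then be refined by motion analysis before being accepted as segment boundaries.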


International Conference on Automation and Logistics | 2011

Motion analysis for capsule endoscopy video segmentation

Baopu Li; Max Q.-H. Meng; Chao Hu

Capsule endoscopy (CE) is a revolutionary technology that enables physicians to examine the whole digestive tract of the human body in a minimally invasive manner. However, it is reported that the large amount of video data produced in each examination creates a troublesome and time-consuming task for clinicians, taking about two hours on average to examine. To reduce this heavy load, automatic analysis of CE video is desired. Since each CE video contains about 60,000 frames, it is necessary to divide such a long video into smaller segments. In this paper, we investigate applying motion analysis approaches for this purpose. Two typical motion analysis methods, adaptive root-patch-search block matching and Bayesian multi-scale differential optical flow, are evaluated for their ability to segment CE video. Experimental results suggest that motion information may be useful for CE video segmentation.
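
A minimal exhaustive block-matching sketch (sum of absolute differences over a search window) illustrates the general idea behind the first motion-analysis method; it is a generic version, not the paper's adaptive root-patch search, and the images are toy grids.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def best_motion(prev, curr, y, x, size, radius):
    """Exhaustive search: where did prev[y:y+size, x:x+size] move to?"""
    ref = [row[x:x + size] for row in prev[y:y + size]]
    best, best_cost = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if not (0 <= yy <= len(curr) - size
                    and 0 <= xx <= len(curr[0]) - size):
                continue
            cand = [row[xx:xx + size] for row in curr[yy:yy + size]]
            cost = sad(ref, cand)
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best

# Toy frames: curr is prev shifted down by 1 row and right by 2 columns.
prev = [[y * 8 + x for x in range(8)] for y in range(8)]
curr = [[prev[y - 1][x - 2] if y >= 1 and x >= 2 else 0
         for x in range(8)] for y in range(8)]
motion = best_motion(prev, curr, 2, 2, 3, 2)
```

The recovered motion vector reflects the global shift; aggregated over many blocks, such vectors indicate how fast the capsule is moving.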


Robotics and Biomimetics | 2012

Wireless capsule endoscopy video automatic segmentation

Ran Zhou; Baopu Li; Zhe Sun; Chao Hu; Max Q.-H. Meng

Wireless capsule endoscopy (WCE) is an advanced technology that allows non-invasive diagnosis inside the human digestive tract; however, diagnosing from it is time-consuming for clinicians because of the large number of frames in each video. A novel and efficient algorithm is proposed in this paper to help clinicians segment WCE video automatically into stomach, small intestine, and large intestine regions. First, since many impurities and bubbles in WCE video frames add to the difficulty of segmentation, a preprocessing step is presented to identify the valid regions in the frames based on color and wavelet texture features. Second, the boundaries between adjacent organs in the WCE video are estimated at two levels, a rough one and a fine one. At the rough level, a color feature is used to draw a dissimilarity curve between frames, whose peaks represent the approximate boundaries we want to locate. At the fine level, Hue-Saturation histogram color features in the HSI color space and uniform LBP texture features from grayscale images are extracted, and a support vector machine (SVM) classifier segments the WCE video into the different regions. The experiments demonstrate the promising efficiency of the proposed algorithm, with average precision and recall as high as 94.33% and 89.50%, respectively.
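
The rough-to-fine idea can be caricatured as follows: given a rough boundary index from the dissimilarity curve, place the final boundary at the first frame inside a window that a classifier assigns to the next organ. The `labels` list stands in for the paper's SVM outputs; the function name and window size are hypothetical.

```python
def refine_boundary(rough_idx, labels, window=3):
    """Fine-level stand-in: snap the boundary to the first frame in a
    window around `rough_idx` that the classifier labels as the organ
    seen at the window's far end."""
    lo = max(0, rough_idx - window)
    hi = min(len(labels), rough_idx + window + 1)
    nxt = labels[hi - 1]          # organ label after the transition
    for i in range(lo, hi):
        if labels[i] == nxt:
            return i
    return rough_idx

# Toy classifier outputs: true transition is at frame 5, rough guess at 6.
labels = ["stomach"] * 5 + ["small intestine"] * 5
boundary = refine_boundary(6, labels)
```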


Intelligent Robots and Systems | 2011

Comparison of several image features for WCE video abstract

Baopu Li; Max Q.-H. Meng

A direct view of the inner tract of the small intestine was not feasible until a revolutionary imaging technology, wireless capsule endoscopy (WCE), appeared in 2001. However, interpreting the video data of each patient's digestive tract is left to the naked eyes of medical staff, a tedious and time-consuming process averaging about two hours per WCE video. To address this problem, automatic WCE video analysis is required. In this paper, we present a comparative study of several image-based features that may be suitable for WCE video abstraction, a good candidate for reducing the burden on physicians. Color, texture, and motion features built from images are investigated and compared to show how well they represent video content for a WCE video abstract. Preliminary experimental results for these features are demonstrated and discussed. We find that textural and motion features may be suitable candidates for visual frame depiction in WCE video abstracts in terms of visual content representation and compression ratio. Clinical validation of our work remains to be carried out in the near future.
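
A feature-driven video abstract can be sketched with a simple greedy keyframe selector: keep a frame whenever its feature distance to the last kept frame exceeds a threshold, trading content coverage against compression ratio. The threshold and toy features are assumptions, not the paper's method.

```python
import math

def select_keyframes(features, threshold):
    """Greedy abstract: keep frame i when its feature distance to the
    most recently kept frame exceeds `threshold`."""
    kept = [0]
    for i in range(1, len(features)):
        if math.dist(features[i], features[kept[-1]]) > threshold:
            kept.append(i)
    return kept

# Toy per-frame feature vectors (e.g. a 2-bin colour histogram).
features = [(0.0, 0.0), (0.1, 0.0), (2.0, 0.0), (2.1, 0.0), (5.0, 0.0)]
kept = select_keyframes(features, 1.0)
compression_ratio = len(kept) / len(features)
```

A stronger feature (here, a larger inter-frame distance for genuinely new content) keeps fewer, more representative frames.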


World Congress on Intelligent Control and Automation | 2012

An optimal parking space search model based on fuzzy multiple attribute decision making

Shouyuan Yu; Baopu Li; Qi Zhang; Max Q.-H. Meng

This paper proposes a new method based on multiple attribute decision making to search for the optimal parking space in a parking lot. The advantages and limitations of different kinds of attribute weights are investigated. By mitigating the limitations of subjective weighting and combining subjective with objective weights, we obtain an integrated weight. Using this hybrid weight, we show how to apply multiple attribute decision making to search for the optimal parking space. Experiments show that the method performs satisfactorily and can find the optimal parking space in real time. The proposed method can be applied to city parking guidance and information systems.
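
A minimal sketch of the integrated-weight idea: blend subjective and objective attribute weights, then score alternatives with a weighted sum. The blend factor `alpha`, the two attributes, and the candidate spaces are all assumptions; the paper's fuzzy formulation is more involved.

```python
def integrated_weights(subjective, objective, alpha=0.5):
    """Blend subjective and objective attribute weights, renormalised."""
    mixed = [alpha * s + (1 - alpha) * o
             for s, o in zip(subjective, objective)]
    total = sum(mixed)
    return [m / total for m in mixed]

def score(alternative, weights):
    """Simple additive value: weighted sum of normalised attributes."""
    return sum(w * a for w, a in zip(weights, alternative))

# Toy parking spaces, each rated on two attributes in [0, 1]
# (e.g. closeness to exit, ease of manoeuvre -- hypothetical).
spaces = {"A": [0.9, 0.2], "B": [0.4, 0.8]}
w = integrated_weights(subjective=[0.7, 0.3], objective=[0.3, 0.7])
best = max(spaces, key=lambda k: score(spaces[k], w))
```

With the blended weights, neither the purely subjective nor the purely objective ranking dominates the choice.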


Annals of Biomedical Engineering | 2011

Contourlet-Based Features for Computerized Tumor Detection in Capsule Endoscopy Images

Baopu Li; Max Q.-H. Meng

This article presents a computer-aided detection system for capsule endoscopy (CE) images that uses contourlet-based color textural features to recognize tumors in the digestive tract. As tumors exhibit rich color texture information, a novel color texture feature based on the contourlet transform is proposed to describe the characteristics of tumors in CE images. The proposed features are a hybrid of the contourlet transform and uniform local binary patterns, yielding detailed and robust multi-directional color texture features for CE images. A sequential floating forward search is further applied to refine the proposed features. With a support vector machine for classification, comprehensive experiments on our data reveal an encouraging accuracy of 93.6% for tumor detection in CE images using the proposed features.
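
The uniform local binary pattern component can be sketched directly: an 8-bit LBP code is uniform when its circular bit string has at most two 0/1 transitions, and such patterns are the ones the features retain.

```python
def lbp_code(center, neighbours):
    """8-bit LBP code: bit i is 1 when neighbour i >= the centre pixel."""
    return sum((1 << i) for i, n in enumerate(neighbours) if n >= center)

def is_uniform(code):
    """Uniform patterns have at most two 0/1 transitions on the
    circular 8-bit string."""
    bits = [(code >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2

# Toy neighbourhood: four bright neighbours then four dark ones.
code = lbp_code(5, [6, 7, 8, 9, 1, 1, 1, 1])
```

Here the code is `0b00001111` (a single bright arc), a uniform pattern, while an alternating code like `0b01010101` is not.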


World Congress on Intelligent Control and Automation | 2012

A comparative study of endoscopic polyp detection by textural features

Baopu Li; Max Q.-H. Meng; Chao Hu

Digestive tract cancer is a major threat to humans, and capsule endoscopy (CE) is a relatively new technology for detecting diseases in the small bowel. Since polyps are an important symptom of digestive cancer, it is important to detect them with computerized methods. In this work, we comparatively investigate computer-aided polyp detection using machine-learning methods built upon color textural features. Four textural features, wavelet-based features, color wavelet covariance, rotation-invariant uniform local binary patterns, and complete local binary patterns, are used to characterize CE images, and their performance is extensively studied in three different color spaces: RGB, HSI, and Lab.
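
For reference, one common RGB-to-HSI conversion (the arccos form) can be written as below; the paper does not specify which HSI variant it uses, so this particular formula is an assumption.

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert RGB in [0, 1] to (hue in radians, saturation, intensity)."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0                    # achromatic pixel: hue undefined
    else:
        theta = math.acos(max(-1.0, min(1.0, num / den)))
        h = theta if b <= g else 2 * math.pi - theta
    return h, s, i

# Pure red: hue 0, full saturation, intensity 1/3.
h, s, i = rgb_to_hsi(1.0, 0.0, 0.0)
```

Separating chromaticity (H, S) from intensity (I) is what makes such spaces attractive for texture features under varying illumination.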


International Conference on Robotics and Automation | 2012

A novel correspondence searching strategy in multiocular vision

Ning Wei; Baopu Li; Qing He; Chao Hu; Max Q.-H. Meng

Correspondence searching among different images is a fundamental problem in computer vision. Finding correspondences correctly and rapidly is important, especially for real-time tracking systems, so the definition of search areas in images is crucial. The traditional epipolar constraint is not robust to noise, and some modified methods lack explicit geometric meaning; none of them can define rational search areas under noise. This paper proposes two new binocular imaging constraints with clear geometric meaning and strong constraining power. Based on them, a novel searching strategy over multiple images is developed that defines optimal search areas of the smallest size with the best reliability. Practical algorithms for implementation are presented, and experiments with real images validate the effectiveness of the proposed strategy.
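
Conventional epipolar search areas are bands around the epipolar line; a small sketch of the point-to-line distance test makes the baseline that the paper's constraints aim to tighten concrete. The line coefficients and tolerance here are hypothetical.

```python
import math

def epipolar_distance(point, line):
    """Distance from image point (x, y) to the epipolar line
    a*x + b*y + c = 0."""
    x, y = point
    a, b, c = line
    return abs(a * x + b * y + c) / math.hypot(a, b)

def in_search_band(point, line, tol):
    """Noise-tolerant search area: a band of half-width `tol`
    around the epipolar line."""
    return epipolar_distance(point, line) <= tol

# Toy line y = 2 (coefficients 0*x + 1*y - 2 = 0).
d = epipolar_distance((5.0, 4.0), (0.0, 1.0, -2.0))
```

Widening `tol` tolerates more noise but enlarges the search area, exactly the trade-off that tighter geometric constraints improve.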

Collaboration


Dive into Baopu Li's collaboration.

Top Co-Authors

Max Q.-H. Meng, Chinese Academy of Sciences
Jingsheng Liao, Chinese Academy of Sciences
Chao Hu, Chinese Academy of Sciences
Ning Wei, Chinese Academy of Sciences
Qing He, Chinese Academy of Sciences
Ran Zhou, Chinese Academy of Sciences
Ruyi Zhang, Chinese Academy of Sciences
Tianyu Yang, Chinese Academy of Sciences
Xiang Fang, Chinese Academy of Sciences