Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Bai Ying Lei is active.

Publication


Featured research published by Bai Ying Lei.


Expert Systems With Applications | 2014

Reversible watermarking scheme for medical image based on differential evolution

Bai Ying Lei; Ee-Leng Tan; Siping Chen; Dong Ni; Tianfu Wang; Haijun Lei

A reversible watermarking method is proposed with wavelet transforms and SVD. Signature and logo data are inserted by a recursive dither modulation algorithm. DE is explored to design the quantization steps optimally. A good balance of imperceptibility, robustness, and capacity is obtained by DE. Experiments show good performance, and the method outperforms related algorithms. Currently, most medical images are stored and exchanged with little or no security; hence it is important to protect the intellectual property of these images in a secure environment. In this paper, a new and reversible watermarking method is proposed to address this security issue. Specifically, signature information and textual data are inserted into the original medical images based on a recursive dither modulation (RDM) algorithm after wavelet transform and singular value decomposition (SVD). In addition, differential evolution (DE) is applied to design the quantization steps (QSs) optimally for controlling the strength of the watermark. Using these specially designed hybrid techniques, the proposed watermarking technique obtains good imperceptibility and high robustness. Experimental results indicate that the proposed method is not only highly competitive, but also outperforms the existing methods.
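
As a rough illustration of two building blocks mentioned above, the sketch below implements plain quantization index modulation (the idea underlying dither-modulation embedding) and lets SciPy's differential evolution pick the quantization step against a toy imperceptibility-versus-robustness objective; the coefficients, watermark bits, and objective are synthetic stand-ins, not the paper's actual formulation.

```python
import numpy as np
from scipy.optimize import differential_evolution

def qim_embed(coeff, bit, q):
    # Quantize the coefficient onto one of two interleaved lattices (bit 0 / bit 1).
    return np.round((coeff - bit * q / 2.0) / q) * q + bit * q / 2.0

def qim_extract(coeff, q):
    # The recovered bit is the parity of the nearest multiple of q/2.
    return int(round(coeff / (q / 2.0))) % 2

def objective(q_vec, coeffs, bits):
    # Toy trade-off: embedding distortion (imperceptibility) vs. decision margin (robustness).
    q = q_vec[0]
    marked = qim_embed(coeffs, bits, q)
    distortion = np.mean((marked - coeffs) ** 2)
    return distortion - 0.1 * (q / 2.0)

coeffs = 100.0 * np.random.rand(64)          # stand-in for DWT/SVD coefficients
bits = np.random.randint(0, 2, size=64)      # watermark bits (signature/logo data)
result = differential_evolution(objective, bounds=[(1.0, 20.0)],
                                args=(coeffs, bits), seed=0)
q_opt = result.x[0]
print("optimized quantization step:", q_opt)

marked = qim_embed(coeffs, bits, q_opt)
recovered = np.array([qim_extract(c, q_opt) for c in marked])
print("bits recovered correctly:", np.array_equal(recovered, bits))
```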


IEEE Transactions on Biomedical Engineering | 2015

Accurate Segmentation of Cervical Cytoplasm and Nuclei Based on Multiscale Convolutional Network and Graph Partitioning

Youyi Song; Ling Zhang; Siping Chen; Dong Ni; Bai Ying Lei; Tianfu Wang

In this paper, a multiscale convolutional network (MSCN) and graph-partitioning-based method is proposed for accurate segmentation of cervical cytoplasm and nuclei. Specifically, deep learning via the MSCN is explored to extract scale-invariant features and then segment regions centered at each pixel. The coarse segmentation is refined by an automated graph partitioning method based on the pretrained features. The texture, shape, and contextual information of the target objects are learned to localize the appearance of distinctive boundaries, which is also explored to generate markers to split touching nuclei. For further refinement of the segmentation, a coarse-to-fine nucleus segmentation framework is developed. The computational complexity of the segmentation is reduced by using superpixels instead of raw pixels. Extensive experimental results demonstrate that the proposed cervical cell segmentation method delivers promising results and outperforms existing methods.
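
The superpixel shortcut mentioned in the abstract can be sketched as follows: average a coarse pixel-wise probability map over SLIC superpixels so that one decision is made per superpixel rather than per pixel. The image, probability map, and threshold here are placeholders, not the paper's MSCN pipeline.

```python
import numpy as np
from skimage.data import astronaut              # placeholder RGB image
from skimage.segmentation import slic

image = astronaut()                              # stand-in for a cervical cytology image
coarse_prob = np.random.rand(*image.shape[:2])   # stand-in for a coarse pixel-wise probability map
labels = slic(image, n_segments=400, compactness=10, start_label=0)

refined = np.empty_like(coarse_prob)
for sp in np.unique(labels):
    mask = labels == sp
    refined[mask] = coarse_prob[mask].mean()     # one decision per superpixel instead of per pixel

segmentation = refined > 0.5
print("foreground fraction after superpixel refinement:", segmentation.mean())
```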


IEEE Transactions on Medical Imaging | 2017

Accurate Cervical Cell Segmentation from Overlapping Clumps in Pap Smear Images

Youyi Song; Ee-Leng Tan; Xudong Jiang; Jie-Zhi Cheng; Dong Ni; Siping Chen; Bai Ying Lei; Tianfu Wang

Accurate segmentation of cervical cells in Pap smear images is an important step in automatic pre-cancer identification in the uterine cervix. One of the major segmentation challenges is overlapping of cytoplasm, which has not been well addressed in previous studies. To tackle the overlapping issue, this paper proposes a learning-based method with robust shape priors to segment individual cells in Pap smear images, supporting the automatic monitoring of changes in cells, which is a vital prerequisite for early detection of cervical cancer. We define this splitting problem as a discrete labeling task for multiple cells with a suitable cost function. The labeling results are then fed into our dynamic multi-template deformation model for further boundary refinement. Multi-scale deep convolutional networks are adopted to learn the diverse cell appearance features. We also incorporate high-level shape information to guide segmentation where cell boundaries might be weak or lost due to cell overlapping. An evaluation carried out using two different datasets demonstrates the superiority of our proposed method over state-of-the-art methods in terms of segmentation accuracy.


IEEE Transactions on Medical Imaging | 2017

Automatic Scoring of Multiple Semantic Attributes With Multi-Task Feature Leverage: A Study on Pulmonary Nodules in CT Images

Sihong Chen; Jing Qin; Xing Ji; Bai Ying Lei; Tianfu Wang; Dong Ni; Jie-Zhi Cheng

The gap between computational and semantic features is one of the major factors that keeps computer-aided diagnosis (CAD) performance from clinical usage. To bridge this gap, we exploit three multi-task learning (MTL) schemes to leverage heterogeneous computational features derived from deep learning models of a stacked denoising autoencoder (SDAE) and a convolutional neural network (CNN), as well as hand-crafted Haar-like and HoG features, for the description of 9 semantic features of lung nodules in CT images. We regard that there may exist relations among the semantic features of “spiculation”, “texture”, “margin”, etc., that can be explored with MTL. The Lung Image Database Consortium (LIDC) data is adopted in this study for its rich annotation resources. The LIDC nodules were quantitatively scored w.r.t. 9 semantic features by 12 radiologists from several institutes in the U.S.A. By treating each semantic feature as an individual task, the MTL schemes select and map the heterogeneous computational features toward the radiologists’ ratings with cross-validation evaluation schemes on 2400 randomly selected nodules from the LIDC dataset. The experimental results suggest that the predicted semantic scores from the three MTL schemes are closer to the radiologists’ ratings than the scores from single-task LASSO and elastic net regression methods. The proposed semantic attribute scoring scheme may provide richer quantitative assessments of nodules for better support of diagnostic decisions and management. Meanwhile, the capability of our method to automatically associate medical image contents with clinical semantic terms may also assist the development of medical search engines.
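
A minimal sketch of the single-task-versus-multi-task contrast drawn in the abstract, using scikit-learn's MultiTaskLasso (an l2,1-regularized joint model) against independent per-rating LASSO models; the features, ratings, and regularization strength are synthetic assumptions, not the paper's three MTL schemes.

```python
import numpy as np
from sklearn.linear_model import Lasso, MultiTaskLasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                               # stand-in for heterogeneous nodule features
W = rng.normal(size=(50, 9)) * (rng.random((50, 1)) < 0.2)   # 9 tasks sharing a sparse feature support
Y = X @ W + 0.1 * rng.normal(size=(200, 9))                  # 9 semantic ratings per nodule

mtl = MultiTaskLasso(alpha=0.1).fit(X, Y)                    # joint (group-sparse) selection across tasks
stl = [Lasso(alpha=0.1).fit(X, Y[:, t]) for t in range(9)]   # one independent model per rating

print("features kept jointly (MTL):", int(np.sum(np.any(mtl.coef_ != 0, axis=0))))
print("features kept for task 0 (LASSO):", int(np.sum(stl[0].coef_ != 0)))
```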


Medical Image Computing and Computer Assisted Intervention | 2016

Bridging Computational Features Toward Multiple Semantic Features with Multi-task Regression: A Study of CT Pulmonary Nodules

Sihong Chen; Dong Ni; Jing Qin; Bai Ying Lei; Tianfu Wang; Jie-Zhi Cheng

The gap between computational and semantic features is one of the major factors that keeps computer-aided diagnosis (CAD) performance from clinical usage. To bridge this gap, we propose to utilize a multi-task regression (MTR) scheme that leverages heterogeneous computational features derived from deep learning models of a stacked denoising autoencoder (SDAE) and a convolutional neural network (CNN), as well as Haar-like features, to approach 8 semantic features of lung CT nodules. We regard that there may exist relations among the semantic features of “spiculation”, “texture”, “margin”, etc., that can be exploited with the multi-task learning technique. The Lung Image Database Consortium (LIDC) data is adopted for its rich annotations, where nodules were quantitatively rated for the semantic features by many radiologists. By treating each semantic feature as a task, the MTR selects and regresses the heterogeneous computational features toward the radiologists’ ratings with 10-fold cross-validation evaluation on 1400 randomly selected LIDC nodules. The experimental results suggest that the predicted semantic scores from MTR are closer to the radiologists’ ratings than the predicted scores from single-task LASSO and elastic net regression methods. The proposed semantic scoring scheme may provide richer quantitative assessments of nodules for deeper analysis and support more sophisticated clinical content retrieval in medical databases.


Pattern Recognition | 2015

Saliency-driven image classification method based on histogram mining and image score

Bai Ying Lei; Ee-Leng Tan; Siping Chen; Dong Ni; Tianfu Wang

Since most image classification tasks involve discriminative information (i.e., saliency), this paper proposes a new bag-of-phrase (BoP) approach to incorporate this information. Specifically, a saliency map and local features are first extracted from edge-based dense descriptors. These features are represented by histograms and mined with a discriminative learning technique. An image score calculated from the saliency map is also investigated to optimize a support vector machine (SVM) classifier. Both feature map and kernel trick methods are explored to enhance the accuracy of the SVM classifier. In addition, novel inter- and intra-class histogram normalization methods are investigated to further boost the performance of the proposed method. Experiments using several publicly available benchmark datasets demonstrate that the proposed method achieves promising classification accuracy and superior performance over state-of-the-art methods. A new saliency-driven bag-of-phrase approach for image classification is proposed. An edge-based dense descriptor is applied. Histogram mining and discriminative learning are investigated. The image score is adopted as latent information to optimize a linear classifier. Novel inter- and intra-class histogram normalization methods are explored.
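
The "feature map versus kernel trick" choice mentioned above can be illustrated with scikit-learn: an RBF-kernel SVM on histogram features versus an explicit additive chi-squared feature map feeding a linear SVM. The histograms and labels below are synthetic, and the chi-squared map is one common choice for histogram data rather than the paper's exact setup.

```python
import numpy as np
from sklearn.kernel_approximation import AdditiveChi2Sampler
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC, LinearSVC

rng = np.random.default_rng(0)
X = rng.random((300, 64))
X /= X.sum(axis=1, keepdims=True)                 # normalize rows so they look like histograms
y = (X[:, :32].sum(axis=1) > 0.5).astype(int)     # synthetic labels

kernel_svm = SVC(kernel="rbf", C=1.0)                                  # kernel trick
feature_map_svm = make_pipeline(AdditiveChi2Sampler(sample_steps=2),   # explicit feature map
                                LinearSVC(C=1.0, max_iter=5000))

print("kernel SVM accuracy:     ", cross_val_score(kernel_svm, X, y, cv=5).mean())
print("feature-map SVM accuracy:", cross_val_score(feature_map_svm, X, y, cv=5).mean())
```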


Signal Processing | 2015

Optimal and secure audio watermarking scheme based on self-adaptive particle swarm optimization and quaternion wavelet transform

Bai Ying Lei; Feng Zhou; Ee-Leng Tan; Dong Ni; Haijun Lei; Siping Chen; Tianfu Wang

In this paper, a new audio watermarking scheme based on self-adaptive particle swarm optimization (SAPSO) and quaternion wavelet transform (QWT) is proposed. By obtaining the optimal watermark strength using a uniquely designed objective function, SAPSO addresses the conflicting requirements of robustness, imperceptibility, and capacity in audio watermarking using self-adjusted parameters. To withstand de-synchronization attacks, a synchronization sequence generated by chaotic signals is also adopted in our scheme. Furthermore, the utilization of chaotic signals significantly enhances the security of the proposed scheme. The experimental results validate that our scheme is robust not only against de-synchronization attacks, but also against typical signal manipulations and StirMark attacks. Our comparative analysis also reveals that the proposed scheme outperforms state-of-the-art audio watermarking schemes. A new audio watermarking scheme based on the quaternion wavelet transform is proposed. Self-adaptive particle swarm optimization is developed to optimize the parameters. A synchronization code is inserted to withstand de-synchronization attacks. Security is enhanced by chaotic maps. The proposed method is very robust to resampling and cropping attacks.
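
To illustrate the role SAPSO plays in the scheme (without its self-adaptive parameter control), here is a plain particle swarm optimizer tuning a single watermark-strength value against a toy distortion-versus-robustness objective; the objective, bounds, and PSO constants are illustrative assumptions.

```python
import numpy as np

def fitness(strength):
    # Toy objective (assumption): embedding distortion grows with strength,
    # robustness margin grows more slowly; lower values are better.
    return strength ** 2 - np.log1p(10.0 * strength)

rng = np.random.default_rng(0)
n_particles, n_iters = 20, 50
pos = rng.uniform(0.0, 2.0, n_particles)     # candidate watermark strengths
vel = np.zeros(n_particles)
pbest = pos.copy()
pbest_val = fitness(pbest)
gbest = pbest[pbest_val.argmin()]

for _ in range(n_iters):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 2.0)
    vals = fitness(pos)
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()]

print("optimized watermark strength:", gbest)
```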


IEEE Transactions on Systems, Man, and Cybernetics | 2017

Relational-Regularized Discriminative Sparse Learning for Alzheimer’s Disease Diagnosis

Bai Ying Lei; Peng Yang; Tianfu Wang; Siping Chen; Dong Ni

Accurate identification and understanding of informative features is important for early Alzheimer’s disease (AD) prognosis and diagnosis. In this paper, we propose a novel discriminative sparse learning method with relational regularization to jointly predict clinical scores and classify AD disease stages using multimodal features. Specifically, we apply a discriminative learning technique to expand the class-specific difference and include geometric information for effective feature selection. In addition, two kinds of relational information are incorporated to explore the intrinsic relationships among features and training subjects in terms of similarity learning. We map the original features into the target space to identify the informative and predictive features using a sparse learning technique. A unique loss function is designed to include both the discriminative learning and relational regularization methods. Experimental results based on a total of 805 subjects [including 226 AD patients, 393 mild cognitive impairment (MCI) subjects, and 186 normal controls (NCs)] from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database show that the proposed method can obtain a classification accuracy of 94.68% for AD versus NC, 80.32% for MCI versus NC, and 74.58% for progressive MCI versus stable MCI, respectively. In addition, we achieve remarkable performance for clinical score prediction and classification label identification, which has efficacy for AD diagnosis and prognosis. The algorithm comparison demonstrates the effectiveness of the introduced learning techniques and superiority over state-of-the-art methods.
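
A much-simplified sketch of the joint goal described above: select a sparse feature subset once and reuse it both to regress a clinical score and to classify disease stage. It uses plain LASSO selection on synthetic data and omits the discriminative and relational regularization terms that are the paper's actual contribution.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression, LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 100))                        # stand-in for multimodal imaging features
w = rng.normal(size=100) * (rng.random(100) < 0.1)     # only a few features are informative
score = X @ w + 0.1 * rng.normal(size=400)             # continuous clinical score
label = (score > np.median(score)).astype(int)         # binary disease stage

X_tr, X_te, s_tr, s_te, y_tr, y_te = train_test_split(X, score, label, random_state=0)
selected = np.flatnonzero(Lasso(alpha=0.05).fit(X_tr, s_tr).coef_)   # sparse feature selection

reg = LinearRegression().fit(X_tr[:, selected], s_tr)                 # clinical score prediction
clf = LogisticRegression(max_iter=1000).fit(X_tr[:, selected], y_tr)  # stage classification
print("selected features:", selected.size)
print("score R^2:", reg.score(X_te[:, selected], s_te))
print("stage accuracy:", clf.score(X_te[:, selected], y_te))
```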


IEEE Transactions on Systems, Man, and Cybernetics | 2017

FUIQA: Fetal Ultrasound Image Quality Assessment With Deep Convolutional Networks

Lingyun Wu; Jie-Zhi Cheng; Shengli Li; Bai Ying Lei; Tianfu Wang; Dong Ni

The quality of ultrasound (US) images for the obstetric examination is crucial for accurate biometric measurement. However, manual quality control is a labor-intensive process and often impractical in a clinical setting. To improve the efficiency of examination and alleviate the measurement error caused by improper US scanning operation and slice selection, a computerized fetal US image quality assessment (FUIQA) scheme is proposed to assist the implementation of US image quality control in the clinical obstetric examination. The proposed FUIQA is realized with two deep convolutional neural network models, denoted as L-CNN and C-CNN, respectively. The L-CNN aims to find the region of interest (ROI) of the fetal abdominal region in the US image. Based on the ROI found by the L-CNN, the C-CNN evaluates the image quality by assessing the goodness of depiction of the key structures of the stomach bubble and umbilical vein. To further boost the performance of the L-CNN, we augment the input sources of the neural network with local phase features along with the original US data. The heterogeneous input sources are shown to help improve the performance of the L-CNN. The performance of the proposed FUIQA is compared with the subjective image quality evaluation results from three medical doctors. Comprehensive experiments illustrate that the computerized assessment with our FUIQA scheme is comparable to the subjective ratings from medical doctors.
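
As a rough sketch of the C-CNN idea (scoring an already-cropped ROI), here is a tiny PyTorch classifier; the architecture, input size, and single-channel input are illustrative assumptions and not the networks described in the paper.

```python
import torch
import torch.nn as nn

class QualityCNN(nn.Module):
    # Two conv blocks followed by global pooling and a 2-way output
    # (acceptable / not acceptable depiction of the key structures).
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

roi = torch.randn(1, 1, 128, 128)      # stand-in for a cropped fetal-abdomen ROI
print(QualityCNN()(roi).shape)         # torch.Size([1, 2])
```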


International Conference on Machine Learning | 2015

Joint Learning of Multiple Longitudinal Prediction Models by Exploring Internal Relations

Bai Ying Lei; Siping Chen; Dong Ni; Tianfu Wang

Longitudinal prediction of brain disorders such as Alzheimer’s disease (AD) is important for possible early detection and early intervention. Given the baseline imaging and clinical data, it is interesting to predict the progress of the disease for an individual subject, such as predicting the conversion of Mild Cognitive Impairment (MCI) to AD, in future years. Most existing methods predict different clinical scores using different models, or predict multiple scores at different future time points separately. This often misses the chance of coordinated learning of multiple prediction models for jointly predicting multiple clinical scores at multiple future time points. In this paper, we propose a novel method for joint learning of multiple longitudinal prediction models for multiple clinical scores at multiple future time points. First, for each longitudinal prediction model, we explore three important relationships among training samples, features, and clinical scores, respectively, for enhancing its learning. Then, we further introduce an additional relation among different longitudinal prediction models, allowing them to select a common set of features from the baseline imaging and clinical data, with an l2,1 sparsity constraint, for their joint training. We evaluate the performance of our joint prediction models with data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database, showing much better performance than state-of-the-art methods in predicting multiple clinical scores at multiple future time points.

Collaboration


Dive into Bai Ying Lei's collaboration.

Top Co-Authors

Ee-Leng Tan

Nanyang Technological University


Xudong Jiang

Nanyang Technological University
