Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jianwei Lu is active.

Publication


Featured research published by Jianwei Lu.


Journal of Vision | 2004

Perceptual consequences of feature-based attention

Jianwei Lu; Laurent Itti

Attention modulates visual processing along at least two dimensions: a spatial dimension, which enhances the representation of stimuli within the focus of attention, and a feature dimension, which is thought to enhance attended visual features (e.g., upward motion) throughout the visual field. We investigate the consequences of feature-based attention on visual perception, using dual-task human psychophysics and two distant drifting Gabor stimuli to systematically explore 64 combinations of visual features (orientations and drift speeds) and tasks (discriminating orientation or drift speed). The resulting single, consistent data set suggests a functional model, which predicts a maximum rule by which only the dominant product of feature enhancement and feature relevance may benefit perception.
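
Read literally, the abstract's maximum rule can be summarized as

B = max_f ( E_f x R_f ),

where E_f denotes the attentional enhancement of feature f and R_f its relevance to the current task; these symbols are illustrative shorthand for the abstract's wording, not notation taken from the paper itself.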


PLOS ONE | 2013

VEGF-Mediated Proliferation of Human Adipose Tissue-Derived Stem Cells

Guangfeng Chen; Xiujuan Shi; Chen Sun; Min Li; Qing Zhou; Chen Zhang; Jun Huang; Yu Qiu; Xiangyi Wen; Yan Zhang; Yushan Zhang; Shuzhang Yang; Lixia Lu; Jieping Zhang; Qionglan Yuan; Jianwei Lu; Guo-Tong Xu; Yunyun Xue; Zibing Jin; Cizhong Jiang; Ming Ying; Xiaoqing Liu

Human adipose tissue-derived stem cells (ADSCs) are an attractive multipotent stem cell source with therapeutic applicability across diverse fields for the repair and regeneration of acute and chronically damaged tissues. In recent years, there has been increasing interest in ADSCs for tissue engineering applications. However, the mechanisms underlying the regulation of ADSC proliferation are not fully understood. Here we show that 47 transcripts are up-regulated and 23 are down-regulated in ADSCs compared to terminally differentiated cells, based on global mRNA and microRNA profiling. Among the up-regulated genes, the expression of vascular endothelial growth factor (VEGF) is fine-tuned by miR-199a-5p. Further investigation indicates that VEGF accelerates ADSC proliferation, whereas the multipotency of ADSCs remains stable in terms of adipogenic, chondrogenic and osteogenic potentials after VEGF treatment, suggesting that VEGF may serve as an excellent supplement for accelerating ADSC proliferation during in vitro expansion.


Biomedical Engineering and Informatics | 2010

iBrowse: Software for low vision to access Internet

Guobei Xiao; Guo-Tong Xu; Jianwei Lu

New user interface software called iBrowse was developed to help visually impaired people access the Internet. iBrowse, written in Extensible Application Markup Language (XAML) and C#, adopts a design strategy similar to our previously implemented LowBrowse software, which was an extension (add-on) for the Firefox browser. Instead of following the traditional magnification technique, iBrowse allows low-vision users to adjust a few style parameters (font size, etc.) and then read all websites at their maximum reading efficiency, regardless of how web authors mark up their pages. iBrowse contains two separate frames: a single-line reading frame and a global webpage frame. This lets low-vision users read the text content of a webpage while at the same time appreciating the page's global layout as the website author intended. iBrowse, combined with the earlier LowBrowse, could benefit millions of people with impaired vision and efficiently enhance their web accessibility.
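
As an illustration of the re-styling idea described above (not the original XAML/C# implementation), the sketch below ignores the author's markup, extracts the page text, and reflows it for a single-line reading frame at a user-chosen size; the ReadingStyle parameters and chunking logic are hypothetical.

```python
# Minimal Python sketch of the iBrowse idea (the original is XAML/C#):
# discard author styling, extract visible text, and reflow it into
# single lines for a large-print reading frame.
from dataclasses import dataclass
from html.parser import HTMLParser
import textwrap

@dataclass
class ReadingStyle:           # hypothetical user-adjustable style parameters
    font_size_pt: int = 36    # large print for low-vision reading
    chars_per_line: int = 40  # width of the single-line reading frame

class TextExtractor(HTMLParser):
    """Collect visible text, discarding the author's markup and styling."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

def reading_lines(html: str, style: ReadingStyle):
    """Yield single lines for the reading frame, independent of page markup."""
    extractor = TextExtractor()
    extractor.feed(html)
    text = " ".join(extractor.chunks)
    for line in textwrap.wrap(text, style.chars_per_line):
        yield line  # a real UI would render each line at style.font_size_pt

if __name__ == "__main__":
    page = "<html><body><h1 style='font-size:8px'>News</h1><p>Tiny text.</p></body></html>"
    for line in reading_lines(page, ReadingStyle(font_size_pt=48, chars_per_line=30)):
        print(line)
```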


Journal of Vision | 2014

Feature-based attention is independent of object appearance

Guobei Xiao; Guo-Tong Xu; Xiaoqing Liu; J.-Y. Xu; Fang Wang; Li Li; Laurent Itti; Jianwei Lu

How attention interacts with low-level visual representations to give rise to perception remains a central yet controversial question in neuroscience. While several previous studies suggest that the units of attentional selection are individual objects, other evidence points instead toward lower-level features, such as an attended color or direction of motion. We used both human fMRI and psychophysics to investigate the relationship between object-based and feature-based attention. Specifically, we focused on whether feature-based attention is modulated by object appearance, comparing three conditions: (a) features appearing as one object; (b) features appearing as two separate but identical objects; (c) features appearing as two different objects. Stimuli were two random-dot fields presented on either side of central fixation, and object appearance was induced by the presence of one or two boxes around the fields. In the fMRI experiment, participants performed a luminance discrimination task on one side and ignored the other side, where we probed for enhanced activity when that side was either perceived as belonging to the same object as the task side or shared features with it. In the psychophysical experiments, participants performed luminance discrimination on both sides with overlapping red and green dots, attending to either the same features (red/red or green/green) or different features (red/green or green/red) on the two sides. Results show that feature-based attentional enhancement exists in all three conditions, i.e., regardless of whether features appear as one object, two identical objects, or two different objects. Our findings indicate that feature-based attention differs from object-based attention in that it does not depend on object appearance. Thus feature-based attention may be mediated by earlier cortical processes that operate before visual features are grouped into well-formed objects.


International Conference on Neural Information Processing | 2017

Deep Salient Object Detection via Hierarchical Network Learning

Dandan Zhu; Ye Luo; Lei Dai; Xuan Shao; Laurent Itti; Jianwei Lu

Salient object detection is a fundamental problem in both pattern recognition and image processing. Previous salient object detection algorithms usually involve various features based on priors or assumptions about the properties of the objects. Inspired by the effectiveness of recently developed feature learning, we propose a novel deep salient object detection (DSOD) model that uses a 152-layer deep residual network (ResNet) for saliency computation. In particular, we model image saliency from both local and global perspectives. In the local feature estimation stage, we detect local saliency using a deep residual network (ResNet-L) that learns local region features to determine the saliency value of each pixel. In the global feature extraction stage, another deep residual network (ResNet-G) is trained to predict the saliency score of each image based on global features. The final saliency map is generated by a conditional random field (CRF) that combines the local- and global-level saliency maps. Our DSOD model is capable of uniformly highlighting objects of interest against complex backgrounds while preserving object details. Quantitative and qualitative experiments on three benchmark datasets demonstrate that our DSOD method outperforms state-of-the-art methods in salient object detection.
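
A minimal PyTorch sketch of the two-branch idea described above. The small convolutional backbones stand in for the ResNet-L/ResNet-G branches, the layer sizes are illustrative, and the paper's CRF fusion step is replaced here by a simple learned weighting for brevity.

```python
import torch
import torch.nn as nn

class LocalBranch(nn.Module):
    """Stand-in for ResNet-L: per-pixel saliency from local region features."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(32, 1, 1)  # saliency value per pixel

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x)))

class GlobalBranch(nn.Module):
    """Stand-in for ResNet-G: an image-level saliency score from global features."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        g = self.features(x).flatten(1)
        return torch.sigmoid(self.head(g))  # one score per image

class DSODSketch(nn.Module):
    """Combine local and global saliency; the paper uses a CRF, while this
    sketch uses a learned scalar weight as a simple stand-in."""
    def __init__(self):
        super().__init__()
        self.local_branch, self.global_branch = LocalBranch(), GlobalBranch()
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, x):
        local_map = self.local_branch(x)                          # (B, 1, H, W)
        global_score = self.global_branch(x)[:, :, None, None]    # (B, 1, 1, 1)
        return self.alpha * local_map + (1 - self.alpha) * global_score * local_map

if __name__ == "__main__":
    print(DSODSketch()(torch.randn(2, 3, 64, 64)).shape)  # torch.Size([2, 1, 64, 64])
```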


International Conference on Neural Information Processing | 2017

MC-DCNN: Dilated Convolutional Neural Network for Computing Stereo Matching Cost

Xiao Liu; Ye Luo; Yu Ye; Jianwei Lu

Designing a model that computes a better matching cost is a fundamental problem in stereo methods. In this paper, we propose a novel convolutional neural network (CNN) architecture, called MC-DCNN, for computing the matching cost of two image patches. By adding dilated convolution, our model gains a larger receptive field without adding parameters or losing resolution. We also concatenate the features of the last three convolutional layers to form a better descriptor that contains information from different image levels. Experimental results on the Middlebury datasets validate that the proposed method outperforms the baseline CNN on the stereo matching problem and performs especially well in weakly textured areas, a known shortcoming of traditional methods.
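
A minimal PyTorch sketch of the matching-cost idea: a shared (siamese-style) feature network whose last convolution is dilated and whose last three convolutional outputs are concatenated as the patch descriptor. The layer sizes and the cosine-similarity cost are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchFeatures(nn.Module):
    """Shared feature extractor for an image patch. conv3 is dilated, which
    enlarges the receptive field without extra parameters or downsampling."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, padding=1)
        self.conv2 = nn.Conv2d(32, 32, 3, padding=1)
        self.conv3 = nn.Conv2d(32, 32, 3, padding=2, dilation=2)

    def forward(self, x):
        f1 = F.relu(self.conv1(x))
        f2 = F.relu(self.conv2(f1))
        f3 = F.relu(self.conv3(f2))
        # Concatenate the last three conv outputs as the patch descriptor.
        return torch.cat([f1, f2, f3], dim=1)

def matching_cost(left_patch, right_patch, net):
    """Lower cost = better match; cosine similarity is a simple stand-in
    for the learned decision layers of the full model."""
    fl = net(left_patch).flatten(1)
    fr = net(right_patch).flatten(1)
    return 1.0 - F.cosine_similarity(fl, fr, dim=1)

if __name__ == "__main__":
    net = PatchFeatures()
    left = torch.randn(4, 1, 11, 11)        # grayscale 11x11 patches
    right = torch.randn(4, 1, 11, 11)
    print(matching_cost(left, right, net))  # one cost per patch pair
```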


Journal of Electronic Imaging | 2017

Image salient object detection with refined deep features via convolution neural network

Dandan Zhu; Lei Dai; Xuan Shao; Qiangqiang Zhou; Laurent Itti; Ye Luo; Jianwei Lu

Recent advances in saliency detection have used deep learning to obtain high-level features for detecting salient regions, and have demonstrated superior results over previous works that use handcrafted low-level features. We propose a convolutional neural network (CNN) model that learns high-level features for saliency detection. Compared to other methods, our method has two merits. First, when performing feature extraction, apart from the convolution and pooling steps, we add a restricted Boltzmann machine (RBM) into the CNN framework to obtain more accurate features at an intermediate step. Second, to avoid manually annotated data, we add a deep belief network (DBN) classifier at the end of the model to classify salient and nonsalient regions. Quantitative and qualitative experiments on three benchmark datasets demonstrate that our method performs favorably against state-of-the-art methods.
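
A minimal sketch of the intermediate restricted Boltzmann machine step mentioned above, showing a single contrastive-divergence (CD-1) update on flattened feature vectors. The dimensions and learning rate are illustrative, and the surrounding CNN and DBN classifier are not shown.

```python
import torch

class RBM:
    """Tiny binary RBM trained with CD-1; a sketch of the intermediate
    feature-refinement step, not the paper's exact configuration."""
    def __init__(self, n_visible, n_hidden, lr=0.01):
        self.W = torch.randn(n_visible, n_hidden) * 0.01
        self.b_v = torch.zeros(n_visible)
        self.b_h = torch.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return torch.sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return torch.sigmoid(h @ self.W.t() + self.b_v)

    def cd1_step(self, v0):
        """One contrastive-divergence update from a batch of visible vectors."""
        h0 = self.hidden_probs(v0)
        h_sample = torch.bernoulli(h0)
        v1 = self.visible_probs(h_sample)   # reconstruction
        h1 = self.hidden_probs(v1)
        self.W += self.lr * (v0.t() @ h0 - v1.t() @ h1) / v0.shape[0]
        self.b_v += self.lr * (v0 - v1).mean(0)
        self.b_h += self.lr * (h0 - h1).mean(0)
        return h0  # refined features passed on to the next stage

if __name__ == "__main__":
    features = torch.rand(8, 256)   # e.g. flattened CNN feature maps in [0, 1]
    rbm = RBM(n_visible=256, n_hidden=64)
    refined = rbm.cd1_step(features)
    print(refined.shape)            # torch.Size([8, 64])
```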


Symmetry | 2018

Multi-Scale Adversarial Feature Learning for Saliency Detection

Dandan Zhu; Lei Dai; Ye Luo; Guokai Zhang; Xuan Shao; Laurent Itti; Jianwei Lu

Previous saliency detection methods usually focus on extracting powerful discriminative features to describe images with complex backgrounds. Recently, the generative adversarial network (GAN) has shown a great ability in feature learning for synthesizing high-quality natural images. Motivated by this superior feature learning ability, we present a new multi-scale adversarial feature learning (MAFL) model for image saliency detection. The model is composed of two convolutional neural network (CNN) modules: a multi-scale G-network that takes natural images as input and generates the corresponding synthetic saliency maps, and a D-network containing a novel correlation layer that is used to determine whether an image is a synthetic saliency map or a ground-truth saliency map. Quantitative and qualitative comparisons on several public datasets show the superiority of our approach.
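
A minimal PyTorch sketch of the adversarial setup described above: a generator that maps an image to a saliency map and a discriminator that judges (image, saliency map) pairs. The correlation layer is approximated here by an elementwise product of image and map features, the G-network is single-scale, and all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class GNet(nn.Module):
    """Generator: image -> synthetic saliency map (single-scale stand-in
    for the paper's multi-scale G-network)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, img):
        return self.body(img)

class DNet(nn.Module):
    """Discriminator on (image, saliency map) pairs. The 'correlation layer'
    is approximated by an elementwise product of the two feature maps, an
    illustrative stand-in for the layer described in the paper."""
    def __init__(self):
        super().__init__()
        self.img_feat = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.map_feat = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, img, sal_map):
        corr = self.img_feat(img) * self.map_feat(sal_map)  # correlation-style fusion
        return self.head(corr)  # probability that sal_map is ground truth

if __name__ == "__main__":
    g, d = GNet(), DNet()
    img = torch.randn(2, 3, 64, 64)
    fake = g(img)
    print(fake.shape, d(img, fake).shape)  # (2, 1, 64, 64) (2, 1)
```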


Symmetry | 2018

3D Spatial Pyramid Dilated Network for Pulmonary Nodule Classification

Guokai Zhang; Xiao Liu; Dandan Zhu; Pengcheng He; Lipeng Liang; Ye Luo; Jianwei Lu

Lung cancer mortality is currently the highest among all cancers. With the help of computer-aided detection systems, timely detection of malignant pulmonary nodules at an early stage could improve the patient survival rate. However, pulmonary nodules vary widely in size, and small-diameter nodules are more difficult to detect. Traditional convolutional neural networks use pooling layers to progressively reduce resolution, which hampers the network's ability to capture the tiny but vital features of pulmonary nodules. To tackle this problem, we propose a novel 3D spatial pyramid dilated convolution network to classify the malignancy of pulmonary nodules. Instead of using pooling layers, we use 3D dilated convolutions to learn the detailed characteristics of the pulmonary nodules. Furthermore, we show that fusing multiple receptive fields from different dilated convolutions can further improve the classification performance of the model. Extensive experimental results demonstrate that our model achieves an accuracy of 88.6%, outperforming other state-of-the-art methods.
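
A minimal PyTorch sketch of the multi-receptive-field fusion described above: parallel 3D convolutions with different dilation rates applied to a CT volume patch, concatenated and classified, with no pooling in the feature branches. The channel counts, dilation rates, and input size are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class PyramidDilated3D(nn.Module):
    """Parallel 3D dilated convolutions with different receptive fields,
    fused by concatenation and followed by a malignancy classifier."""
    def __init__(self, dilations=(1, 2, 3), channels=16):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv3d(1, channels, kernel_size=3, padding=d, dilation=d),
                nn.ReLU())
            for d in dilations])
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(channels * len(dilations), 2))  # benign vs. malignant

    def forward(self, volume):
        fused = torch.cat([b(volume) for b in self.branches], dim=1)
        return self.classifier(fused)

if __name__ == "__main__":
    nodule_patch = torch.randn(2, 1, 32, 32, 32)   # batch of 3D CT patches
    logits = PyramidDilated3D()(nodule_patch)
    print(logits.shape)                            # torch.Size([2, 2])
```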


Symmetry | 2018

An Adversarial and Densely Dilated Network for Connectomes Segmentation

Ke Chen; Dandan Zhu; Jianwei Lu; Ye Luo

Automatic reconstruction of neural circuits in the brain is one of the most crucial tasks in neuroscience. Connectome segmentation plays an important role in reconstruction from electron microscopy (EM) images; however, it is rather challenging due to highly anisotropic shapes, inferior image quality, and varying section thickness. In this paper, we propose a novel connectome segmentation framework called the adversarial and densely dilated network (ADDN) to address these issues. ADDN is based on the conditional generative adversarial network (cGAN) structure, which can generate images similar to the ground truth even when the training data are limited. Specifically, we design a densely dilated network (DDN) as the segmentor to allow a deeper architecture and larger receptive fields for more accurate segmentation. The discriminator is trained to distinguish generated segmentations from manual segmentations. During training, the adversarial loss is optimized together with the Dice loss. Extensive experimental results demonstrate that ADDN is effective for this connectome segmentation task, helping to retrieve more accurate segmentations and attenuate the blurriness of the generated boundary maps. Our method obtains state-of-the-art performance while requiring less computation on the ISBI 2012 EM dataset and the mouse piriform cortex dataset.
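
A minimal PyTorch sketch of the combined objective described above: a soft Dice loss on the predicted segmentation map plus an adversarial (binary cross-entropy) term derived from the discriminator's judgment of the generated segmentation. The adv_weight factor and all shapes are illustrative, not the paper's values.

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a predicted probability map vs. a binary mask."""
    pred, target = pred.flatten(1), target.flatten(1)
    intersection = (pred * target).sum(dim=1)
    union = pred.sum(dim=1) + target.sum(dim=1)
    return 1.0 - ((2.0 * intersection + eps) / (union + eps)).mean()

def segmentor_loss(pred, target, d_score_fake, adv_weight=0.1):
    """Dice loss plus adversarial loss: the segmentor is rewarded when the
    discriminator scores its output as 'manual' (label 1). adv_weight is an
    illustrative weighting, not the paper's value."""
    dice = soft_dice_loss(pred, target)
    adv = F.binary_cross_entropy(d_score_fake, torch.ones_like(d_score_fake))
    return dice + adv_weight * adv

if __name__ == "__main__":
    pred = torch.rand(2, 1, 64, 64)              # segmentor output (probabilities)
    target = (torch.rand(2, 1, 64, 64) > 0.5).float()
    d_score_fake = torch.rand(2, 1)              # discriminator score on pred
    print(segmentor_loss(pred, target, d_score_fake).item())
```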

Collaboration


Dive into Jianwei Lu's collaborations.

Top Co-Authors


Laurent Itti

University of Southern California
