

Publication


Featured research published by Yinlin Li.


IEEE Transactions on Systems, Man, and Cybernetics | 2014

Introducing Memory and Association Mechanism Into a Biologically Inspired Visual Model

Hong Qiao; Yinlin Li; Tang Tang; Peng Wang

A well-known biologically inspired hierarchical model, the HMAX model, which corresponds to areas V1 to V4 of the ventral pathway in the primate visual cortex, has been successfully applied to multiple visual recognition tasks. The model achieves position- and scale-tolerant recognition, a central problem in pattern recognition. In this paper, based on further biological experimental evidence, we introduce a memory and association mechanism into the HMAX model. The main contributions of this work are: 1) mimicking the active memory and association mechanism and adding top-down adjustment to the HMAX model, which, to our knowledge, is the first attempt to add active adjustment to this model; and 2) from an information perspective, algorithms based on the new model reduce storage requirements while maintaining good recognition performance. The new model is also applied to object recognition. Preliminary experimental results show that our method is efficient and has a much lower memory requirement.
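The HMAX architecture the abstract builds on alternates template-matching ("simple") and max-pooling ("complex") layers. A minimal NumPy/SciPy sketch of an S1/C1 stage (Gabor filtering followed by local max-pooling) is shown below; the filter parameters and image are illustrative stand-ins, not values from the paper:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=11, wavelength=5.0, theta=0.0, sigma=4.5, gamma=0.3):
    """Gabor filter, the standard model of V1 simple-cell receptive fields."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * xr / wavelength)
    return g - g.mean()  # zero-mean so flat regions give no response

def s1_layer(image, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """S1: convolve with oriented Gabors (one response map per orientation)."""
    return [np.abs(convolve2d(image, gabor_kernel(theta=t), mode='same'))
            for t in thetas]

def c1_layer(s1_maps, pool=8):
    """C1: local max-pooling over space, giving position tolerance."""
    pooled = []
    for m in s1_maps:
        h, w = (m.shape[0] // pool) * pool, (m.shape[1] // pool) * pool
        blocks = m[:h, :w].reshape(h // pool, pool, w // pool, pool)
        pooled.append(blocks.max(axis=(1, 3)))
    return pooled

img = np.random.rand(64, 64)   # stand-in for a grayscale input image
c1 = c1_layer(s1_layer(img))
print(len(c1), c1[0].shape)    # 4 orientation maps, each pooled to 8x8
```

The paper's contribution sits on top of such a feedforward stage: a memory store and top-down adjustment that the plain S1/C1 pipeline lacks.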


IEEE Transactions on Systems, Man, and Cybernetics | 2015

Biologically Inspired Visual Model With Preliminary Cognition and Active Attention Adjustment

Hong Qiao; Xuanyang Xi; Yinlin Li; Wei Wu; Fengfu Li

Recently, many computational models have been proposed to simulate the visual cognition process. For example, the hierarchical max-pooling (HMAX) model was proposed according to the hierarchical, bottom-up structure of V1 to V4 in the ventral pathway of the primate visual cortex, and achieves position- and scale-tolerant recognition. In our previous work, we introduced memory and association into the HMAX model to simulate the visual cognition process. In this paper, we improve our theoretical framework by mimicking a more elaborate structure and function of the primate visual cortex. We mainly focus on a new formation of memory and association in visual processing under different circumstances, as well as preliminary cognition and active adjustment in the inferior temporal cortex, which are absent in the HMAX model. The main contributions of this paper are: 1) in the memory and association part, we apply deep convolutional neural networks to extract various episodic features of objects, since people use different features for object recognition; moreover, to achieve fast and robust recognition in the retrieval and association process, different types of features are stored in separate clusters, and the feature binding of the same object is stimulated in a loop-discharge manner; and 2) in the preliminary cognition and active adjustment part, we introduce preliminary cognition to classify different types of objects, since distinct neural circuits in the human brain are used for identifying various types of objects; furthermore, active cognition adjustment for occlusion and orientation is implemented in the model to mimic the top-down effect in the human cognition process. Finally, our model is evaluated on two face databases, CAS-PEAL-R1 and AR. The results demonstrate that our model performs visual recognition efficiently, with much lower memory storage requirements and better performance than traditional, purely computational methods.
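The separate-cluster memory described in the abstract (one store per feature type, with retrieval binding the per-store answers back to one object identity) can be sketched as independent nearest-neighbour lookups followed by a vote. The feature vectors and object ids below are random stand-ins for the CNN features the paper extracts, and the vote is a simplification of the loop-discharge binding:

```python
import numpy as np

rng = np.random.default_rng(0)

# One memory store per feature type, as in the model's separated clusters.
# Each store maps an object id to that object's stored feature vector.
stores = {
    "shape":   {obj: rng.standard_normal(16) for obj in ("A", "B", "C")},
    "texture": {obj: rng.standard_normal(16) for obj in ("A", "B", "C")},
}

def retrieve(query_feats):
    """Look the query up in every store independently, then bind the
    per-store answers by majority vote (a stand-in for loop-discharge
    feature binding)."""
    votes = {}
    for ftype, feat in query_feats.items():
        best = min(stores[ftype],
                   key=lambda o: np.linalg.norm(stores[ftype][o] - feat))
        votes[best] = votes.get(best, 0) + 1
    return max(votes, key=votes.get)

# Query: object B's stored features plus a little noise in each store.
query = {ft: stores[ft]["B"] + 0.1 * rng.standard_normal(16) for ft in stores}
print(retrieve(query))   # 'B'
```

Keeping the stores separate is what lets retrieval stay fast and robust: a corrupted feature in one store only costs one vote.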


IEEE Transactions on Systems, Man, and Cybernetics | 2016

Biologically Inspired Model for Visual Cognition Achieving Unsupervised Episodic and Semantic Feature Learning

Hong Qiao; Yinlin Li; Fengfu Li; Xuanyang Xi; Wei Wu

Recently, many biologically inspired visual computational models have been proposed. The design of these models follows related biological mechanisms and structures, and they provide new solutions for visual recognition tasks. In this paper, based on recent biological evidence, we propose a framework to mimic the active and dynamic learning and recognition process of the primate visual cortex. From the perspective of principle, the main contribution is that the framework achieves unsupervised learning of episodic features (including key components and their spatial relations) and semantic features (semantic descriptions of the key components), which support higher-level cognition of an object. From the perspective of performance, the advantages of the framework are as follows: 1) learning episodic features without supervision: for a class of objects without prior knowledge, the key components, their spatial relations, and their cover regions can be learned automatically through a deep neural network (DNN); 2) learning semantic features based on episodic features: within the cover regions of the key components, the semantic geometrical values of these components can be computed based on contour detection; 3) forming general knowledge of a class of objects: the general knowledge of a class, mainly comprising the key components, their spatial relations, and average semantic values, can be formed as a concise description of the class; and 4) achieving higher-level cognition and dynamic updating: for a test image, the model can produce a classification and subclass semantic descriptions, and test samples with high confidence are selected to dynamically update the whole model. Experiments are conducted on face images, and good performance is achieved in each layer of the DNN and in the semantic description learning process. Furthermore, the model can be generalized to recognition tasks for other objects with learning ability.
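The unsupervised episodic-feature idea above (discovering recurring key components of a class without labels) can be illustrated, much simplified, by clustering local patches drawn from a set of images: recurring components show up as cluster centroids, and the stored positions recover their spatial layout. The paper uses a DNN for this; plain k-means over raw patches is only a sketch of the principle, with random images standing in for a real class:

```python
import numpy as np

def extract_patches(images, patch=8, stride=4):
    """Collect local patches from every image; positions are kept so the
    spatial layout of a discovered component can be recovered later."""
    patches, positions = [], []
    for img in images:
        for r in range(0, img.shape[0] - patch + 1, stride):
            for c in range(0, img.shape[1] - patch + 1, stride):
                patches.append(img[r:r+patch, c:c+patch].ravel())
                positions.append((r, c))
    return np.array(patches), positions

def kmeans(X, k=5, iters=20, seed=0):
    """Plain k-means: each centroid is a candidate 'key component' template."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :])**2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers, labels

imgs = [np.random.rand(32, 32) for _ in range(10)]   # stand-in image set
X, pos = extract_patches(imgs)
centers, labels = kmeans(X, k=5)
print(centers.shape)   # 5 learned component templates, 64-dim each
```

The framework's later stages (semantic geometrical values via contour detection, class-level averaging) operate on components discovered this way.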


Neurocomputing | 2016

A biologically inspired model mimicking the memory and two distinct pathways of face perception

Xuanyang Xi; Peijie Yin; Hong Qiao; Yinlin Li; Wensen Feng

In this paper, we propose a face perception model that mimics the biological mechanism of face perception and memory in the human brain. We are mainly inspired by the fact that there are two functionally and neurologically distinct pathways after early face perception, and that both interact to process the changeable features of faces. Accordingly, our model consists of three perception parts, all component-based: facial structure perception, facial expression perception, and facial identity perception. The structure perception has a feed-forward projection to the expression and identity perception, while the expression affects the identity perception through a modulation process. We implement the three parts with reference to three bio-inspired computational models. For facial structure perception, we utilize a cascaded convolutional neural network (CNN) approach to estimate the center locations of key facial components. For facial expression perception, we propose a novel approach that exploits convolutional deep belief networks (CDBN) to spontaneously locate the places containing the most discriminative information and to complete feature learning and feature selection synchronously. For facial identity perception, we propose an approach that adopts the hierarchical max-pooling (HMAX) model to encode notable characteristics of facial components and utilizes a new memory formation integrating the preliminary decision, expression modulation, and final decision processes. We evaluate our model through a series of experiments, and the results demonstrate its rationality and effectiveness.


Frontiers in Computational Neuroscience | 2015

Enhanced HMAX model with feedforward feature learning for multiclass categorization

Yinlin Li; Wei Wu; Bo Zhang; Fengfu Li

In recent years, interdisciplinary research between neuroscience and computer vision has promoted development in both fields. Many biologically inspired visual models have been proposed; among them, the hierarchical max-pooling (HMAX) model is a feedforward model mimicking the structures and functions of V1 to the posterior inferotemporal (PIT) layer of the primate visual cortex, which generates a series of position- and scale-invariant features. However, it can be improved with attention modulation and memory processing, two important properties of the primate visual cortex. Thus, in this paper, based on recent biological research on the primate visual cortex, we mimic the first 100–150 ms of visual cognition to enhance the HMAX model, focusing mainly on the unsupervised feedforward feature learning process. The main modifications are as follows: (1) to mimic the attention modulation mechanism of the V1 layer, a bottom-up saliency map is computed in the S1 layer of the HMAX model, which supports the initial feature extraction for memory processing; (2) to mimic the learning, clustering, and short-term-to-long-term memory conversion abilities of V2 and IT, an unsupervised iterative clustering method is used to learn clusters from multiscale middle-level patches, which are taken as long-term memory; (3) inspired by the multiple-feature encoding mode of the primate visual cortex, information including color, orientation, and spatial position is encoded progressively in different layers of the HMAX model. By adding a softmax layer at the top of the model, multiclass categorization experiments can be conducted. Results on Caltech101 show that the enhanced model, with a smaller memory size, exhibits higher accuracy than the original HMAX model and also achieves better accuracy than other unsupervised feature learning methods on the multiclass categorization task.
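Modification (1) above computes a bottom-up saliency map in the S1 layer. A toy center-surround sketch is given below: the absolute difference between local means at a fine and a coarse scale, normalized to [0, 1]. This is a simplification in the Itti-Koch style, not the paper's exact computation:

```python
import numpy as np

def box_mean(img, k):
    """k-by-k mean filter via a summed-area table (integral image)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    ii = np.pad(p, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    h, w = img.shape
    out = ii[k:k+h, k:k+w] - ii[:h, k:k+w] - ii[k:k+h, :w] + ii[:h, :w]
    return out / (k * k)

def saliency(img, center=3, surround=9):
    """Center-surround contrast: |fine-scale mean - coarse-scale mean|,
    normalized to [0, 1]. High values mark locally distinctive regions."""
    s = np.abs(box_mean(img, center) - box_mean(img, surround))
    return (s - s.min()) / (np.ptp(s) + 1e-12)

img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0   # a bright patch on a dark background
sal = saliency(img)
# The uniform background scores ~0; the patch region stands out.
```

In the enhanced model, a map of this kind gates which S1 responses feed the memory-formation stage, so later layers spend their budget on distinctive regions.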


World Congress on Intelligent Control and Automation | 2016

Grasp type understanding — classification, localization and clustering

Yinlin Li; Yuren Zhang; Hong Qiao; Ken Chen; Xuanyang Xi

Prehensile analysis is a research field attracting multidisciplinary interest from computer science, mechanical engineering, and neuroscience. For robots, grasp type recognition provides critical information for human-robot interaction and robot self-learning. One research direction is to discover the common modes of human hand use with first-person, point-of-view wearable cameras. In contrast to previous methods based on handcrafted features and multi-stage pipelines, we use a convolutional neural network to learn discriminative features of grasp types automatically, which achieves grasp type localization and classification simultaneously in a single-stage pipeline. Furthermore, a clustering method is proposed to find the hierarchical relationships between different grasp types. Experiments are conducted on the UT Grasp dataset and the Yale human grasping dataset. The proposed method shows better accuracy and higher efficiency than traditional methods.
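The clustering step above builds a hierarchy over grasp types from their learned features. A sketch with average-linkage agglomerative clustering follows; the random vectors stand in for the CNN features, and the grasp-type names are illustrative (drawn from common grasp taxonomies), not the paper's label set:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Stand-in feature vectors: three power grasps near one mode,
# three precision grasps near another.
grasp_types = ["large-diameter", "medium-wrap", "power-sphere",
               "precision-disk", "tripod", "pinch"]
power = rng.standard_normal((3, 32)) + 3.0
precision = rng.standard_normal((3, 32)) - 3.0
features = np.vstack([power, precision])

# Average-linkage agglomerative clustering yields the grasp-type hierarchy
# (the dendrogram encoded in Z).
Z = linkage(features, method="average")

# Cutting the dendrogram into two clusters should separate the two families.
labels = fcluster(Z, t=2, criterion="maxclust")
for name, lab in zip(grasp_types, labels):
    print(name, lab)
```

Reading the dendrogram at different depths gives coarse-to-fine groupings, which is exactly the "hierarchical relationships between grasp types" the abstract refers to.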


Physical Review E | 2007

Crossover in the power-law behavior of confined energy in a composite granular chain

Pan Wang; Junchao Xia; Yinlin Li; C. S. Liu


Physical Review B | 2008

Two-order-parameter description of liquid Al under five different pressures

Yinlin Li; Qing-Hai Hao; Qi-Long Cao; C. S. Liu


Physical Review E | 2008

Characterization of reflection intermittency in a composite granular chain

Pan Wang; Yinlin Li; Junchao Xia; C. S. Liu


International Conference on Mechatronics and Automation | 2017

A hierarchical graph matching based key point correspondence method for large distance rover localization

Yinlin Li; Yuren Zhang; Chuankai Liu; Xu Yang; Hong Qiao

Collaboration


Explore Yinlin Li's collaborations.

Top Co-Authors

Hong Qiao, Chinese Academy of Sciences
Wei Wu, Chinese Academy of Sciences
Xuanyang Xi, Chinese Academy of Sciences
C. S. Liu, Chinese Academy of Sciences
Fengfu Li, Chinese Academy of Sciences
Junchao Xia, Chinese Academy of Sciences
Pan Wang, Chinese Academy of Sciences
Peijie Yin, Chinese Academy of Sciences
Yuren Zhang, Chinese Academy of Sciences
Bo Zhang, Chinese Academy of Sciences