Publications


Featured research published by Yichuan Tang.


Science | 2012

A Large-Scale Model of the Functioning Brain

Chris Eliasmith; Terrence C. Stewart; Xuan Choo; Trevor Bekolay; Travis DeWolf; Yichuan Tang; Daniel Rasmussen

Modeling the Brain: Neurons are pretty complicated cells. They display an endless variety of shapes that sprout highly variable numbers of axons and dendrites; they sport time- and voltage-dependent ion channels along with an impressive array of neurotransmitter receptors; and they connect intimately with near neighbors as well as former neighbors who have since moved away. Simulating a sizeable chunk of brain tissue has recently become achievable, thanks to advances in computer hardware and software. Eliasmith et al. (p. 1202; see the Perspective by Machens) present their million-neuron model of the brain and show that it can recognize numerals, remember lists of digits, and write down those lists, tasks that seem effortless for a human but that encompass the triad of perception, cognition, and behavior. Two-and-a-half million model neurons recognize images, learn via reinforcement, and display fluid intelligence.

A central challenge for cognitive and systems neuroscience is to relate the incredibly complex behavior of animals to the equally complex activity of their brains. Recently described, large-scale neural models have not bridged this gap between neural activity and biological function. In this work, we present a 2.5-million-neuron model of the brain (called "Spaun") that bridges this gap by exhibiting many different behaviors. The model is presented only with visual image sequences, and it draws all of its responses with a physically modeled arm. Although simplified, the model captures many aspects of neuroanatomy, neurophysiology, and psychological behavior, which we demonstrate via eight diverse tasks.


international conference on neural information processing | 2013

Challenges in Representation Learning: A Report on Three Machine Learning Contests

Ian J. Goodfellow; Dumitru Erhan; Pierre Carrier; Aaron C. Courville; Mehdi Mirza; Ben Hamner; Will Cukierski; Yichuan Tang; David Thaler; Dong-Hyun Lee; Yingbo Zhou; Chetan Ramaiah; Fangxiang Feng; Ruifan Li; Xiaojie Wang; Dimitris Athanasakis; John Shawe-Taylor; Maxim Milakov; John Park; Radu Tudor Ionescu; Marius Popescu; Cristian Grozea; James Bergstra; Jingjing Xie; Lukasz Romaszko; Bing Xu; Zhang Chuang; Yoshua Bengio

The ICML 2013 Workshop on Challenges in Representation Learning focused on three challenges: the black box learning challenge, the facial expression recognition challenge, and the multimodal learning challenge. We describe the datasets created for these challenges and summarize the results of the competitions. We provide suggestions for organizers of future challenges and some comments on what kind of knowledge can be gained from machine learning competitions.


computer vision and pattern recognition | 2012

Robust Boltzmann Machines for recognition and denoising

Yichuan Tang; Ruslan Salakhutdinov; Geoffrey E. Hinton

While Boltzmann Machines have been successful at unsupervised learning and density modeling of images and speech data, they can be very sensitive to noise in the data. In this paper, we introduce a novel model, the Robust Boltzmann Machine (RoBM), which allows Boltzmann Machines to be robust to corruptions. In the domain of visual recognition, the RoBM is able to accurately deal with occlusions and noise by using multiplicative gating to induce a scale mixture of Gaussians over pixels. Image denoising and in-painting correspond to posterior inference in the RoBM. Our model is trained in an unsupervised fashion with unlabeled noisy data and can learn the spatial structure of the occluders. Compared to standard algorithms, the RoBM is significantly better at recognition and denoising on several face databases.
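The scale-mixture idea behind the RoBM can be illustrated outside the full Boltzmann machine: model each observed pixel as either a trusted value near a clean estimate (narrow Gaussian) or a corruption (broad Gaussian), and infer a per-pixel gate posterior that says which pixels to believe. The following is a minimal NumPy sketch of that mixture view only, not the RoBM itself; the prior mean `mu`, the standard deviations, and the mixing weight `p_clean` are all illustrative values, and a real RoBM learns its parameters unsupervised.

```python
import numpy as np

def gate_posterior(x, mu, sigma_clean=0.1, sigma_noise=1.0, p_clean=0.9):
    """Posterior probability that each pixel x[i] is clean, i.e. drawn from
    the narrow N(mu[i], sigma_clean^2) rather than the broad noise
    component N(mu[i], sigma_noise^2)."""
    def gauss(v, m, s):
        return np.exp(-0.5 * ((v - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

    clean = p_clean * gauss(x, mu, sigma_clean)
    noise = (1.0 - p_clean) * gauss(x, mu, sigma_noise)
    return clean / (clean + noise)

# Toy 4-pixel "image": a prior mean per pixel, with the last two
# pixels corrupted (e.g. by an occluder).
mu = np.array([0.2, 0.4, 0.6, 0.8])     # illustrative clean-image prior
x  = np.array([0.22, 0.38, 3.0, -2.0])  # observed; pixels 3 and 4 corrupted

s = gate_posterior(x, mu)               # per-pixel trust in the observation
denoised = s * x + (1.0 - s) * mu       # fall back to the prior where untrusted
```

Pixels close to the prior mean get a gate posterior near 1 and are kept; wildly corrupted pixels get a posterior near 0 and are replaced by the prior, which is the denoising-as-posterior-inference behavior the abstract describes (the RoBM additionally learns the spatial structure of the occluders, which this per-pixel sketch ignores).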


Neural Networks | 2015

Challenges in representation learning

Ian J. Goodfellow; Dumitru Erhan; Pierre Carrier; Aaron C. Courville; Mehdi Mirza; Benjamin Hamner; William Cukierski; Yichuan Tang; David Thaler; Dong-Hyun Lee; Yingbo Zhou; Chetan Ramaiah; Fangxiang Feng; Ruifan Li; Xiaojie Wang; Dimitris Athanasakis; John Shawe-Taylor; Maxim Milakov; John Park; Radu Tudor Ionescu; Marius Popescu; Cristian Grozea; James Bergstra; Jingjing Xie; Lukasz Romaszko; Bing Xu; Zhang Chuang; Yoshua Bengio

The ICML 2013 Workshop on Challenges in Representation Learning focused on three challenges: the black box learning challenge, the facial expression recognition challenge, and the multimodal learning challenge. We describe the datasets created for these challenges and summarize the results of the competitions. We provide suggestions for organizers of future challenges and some comments on what kind of knowledge can be gained from machine learning competitions.


arXiv: Learning | 2013

Deep Learning using Linear Support Vector Machines

Yichuan Tang


international conference on machine learning | 2010

Deep networks for robust visual recognition

Yichuan Tang; Chris Eliasmith


neural information processing systems | 2013

Learning Stochastic Feedforward Neural Networks

Yichuan Tang; Ruslan Salakhutdinov


neural information processing systems | 2014

Learning Generative Models with Visual Attention

Yichuan Tang; Nitish Srivastava; Ruslan Salakhutdinov


Cognitive Systems Research | 2011

A biologically realistic cleanup memory: Autoassociation in spiking neurons

Terrence C. Stewart; Yichuan Tang; Chris Eliasmith


international conference on machine learning | 2012

Deep Lambertian Networks

Yichuan Tang; Geoffrey E. Hinton; Ruslan Salakhutdinov

Collaboration


Dive into Yichuan Tang's collaborations.

Top Co-Authors

Bing Xu

Université de Montréal


Chetan Ramaiah

Université de Montréal


David Thaler

Université de Montréal
