
Beyond supervised learning

 

Abstract


Supervised learning is a powerful technique for building models that associate data with given targets, and it is widely and successfully adopted in industry today. However, supervised learning comes with limitations: it relies on costly, time-consuming, and error-prone manual labeling of a large set of examples. Moreover, animals do not seem to learn about objects through a teacher during their lifetime; it seems plausible that much of their learning occurs in an unsupervised manner. I will illustrate two general ideas that show a path towards learning without annotation: self-supervised learning and unsupervised disentangling of factors of variation. Self-supervised learning has emerged as a successful method to learn useful features without manual labeling; it exploits the inherent structure of the input data through so-called pretext tasks. I will give an overview of methods in the literature and introduce some state-of-the-art methods developed in our group. In this context I will also present a method that distills learned pretext tasks into a dataset of samples and pseudo-labels. For the first time, it is then possible to compare hand-crafted features, such as HOG, with other learned features in a common framework. I will also discuss a second approach to unsupervised learning, which aims to disentangle the main factors of variation in the data. The idea behind this approach is to automatically cluster variability in the data that can be represented by the same attribute. For example, if the data consists of faces, possible factors of variation are gender, hair style, the presence of glasses, beards, hats, and scarves, pose, expression, skin color, and so on. I will present several techniques from the literature and from our group that identify such factors in an unsupervised manner or with weak labels.
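
To make the notion of a pretext task concrete, the following is a minimal sketch of one well-known example from the literature, rotation prediction (Gidaris et al., 2018), written in PyTorch. It is not one of the methods presented in the talk; the backbone, optimizer settings, and the helper names rotation_batch and train_step are illustrative assumptions.

    # Self-supervised pretext task sketch: predict which of four rotations
    # (0, 90, 180, 270 degrees) was applied to an unlabeled image.
    import torch
    import torch.nn as nn
    import torchvision

    # Encoder with a 4-way classification head for the rotation classes.
    encoder = torchvision.models.resnet18(num_classes=4)
    optimizer = torch.optim.SGD(encoder.parameters(), lr=0.1, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    def rotation_batch(images):
        """Create pseudo-labels for free: each image is rotated by k*90
        degrees and the index k serves as its label; no manual annotation."""
        rotated, labels = [], []
        for k in range(4):
            rotated.append(torch.rot90(images, k, dims=(2, 3)))
            labels.append(torch.full((images.size(0),), k, dtype=torch.long))
        return torch.cat(rotated), torch.cat(labels)

    def train_step(images):
        """One optimization step on a batch of unlabeled images (NCHW)."""
        x, y = rotation_batch(images)
        loss = criterion(encoder(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

The pseudo-label here is just the rotation index, so the encoder never sees a manual annotation; its learned features can later be transferred to downstream tasks.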

DOI 10.1142/9789811237461_0015
Language English
