Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining | 2021

A Transformer-based Framework for Multivariate Time Series Representation Learning


Abstract


We present a novel framework for multivariate time series representation learning based on the transformer encoder architecture. The framework includes an unsupervised pre-training scheme which can offer substantial performance benefits over fully supervised learning on downstream tasks, both with and, notably, even without leveraging additional unlabeled data, i.e., simply by reusing the existing data samples. Evaluating our framework on several public multivariate time series datasets from various domains and with diverse characteristics, we demonstrate that it performs significantly better than the best currently available methods for regression and classification, even on datasets consisting of only a few hundred training samples. Given the pronounced interest in unsupervised learning across nearly all scientific and industrial domains, these findings represent an important landmark: this is the first unsupervised method shown to push the limits of state-of-the-art performance for multivariate time series regression and classification.
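To make the pre-training idea concrete, the following is a minimal PyTorch sketch of a transformer encoder over multivariate time series trained with a masked-value reconstruction objective. It is an illustration under stated assumptions, not the paper's implementation: the class name TSTransformerEncoder, the hyperparameter defaults, and the per-element random masking are hypothetical choices (the paper itself masks contiguous segments of each input variable).

import torch
import torch.nn as nn

class TSTransformerEncoder(nn.Module):
    """Illustrative transformer encoder for multivariate time series.

    Each time step's feature vector (feat_dim variables) is linearly
    projected to the model dimension, combined with a learnable
    positional encoding, and passed through a stack of transformer
    encoder layers. A linear output layer maps back to the input
    dimension so the model can be pre-trained to reconstruct masked
    values.
    """

    def __init__(self, feat_dim, max_len, d_model=64, n_heads=8,
                 num_layers=3, dim_feedforward=256, dropout=0.1):
        super().__init__()
        self.project_inp = nn.Linear(feat_dim, d_model)
        # Learnable positional encoding (one vector per time step).
        self.pos_enc = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward, dropout,
            batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.output = nn.Linear(d_model, feat_dim)

    def forward(self, x):
        # x: (batch, seq_len, feat_dim)
        z = self.project_inp(x) + self.pos_enc[:, :x.size(1)]
        z = self.encoder(z)
        return self.output(z)  # per-step reconstruction

def pretrain_step(model, x, optimizer, mask_ratio=0.15):
    """One unsupervised pre-training step: hide a random fraction of
    input values and compute MSE only on the masked positions."""
    mask = torch.rand_like(x) < mask_ratio      # True = masked
    x_masked = x.masked_fill(mask, 0.0)         # zero out masked values
    pred = model(x_masked)
    loss = ((pred - x) ** 2)[mask].mean()       # MSE on masked entries only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

After pre-training in this fashion, the reconstruction head would be replaced with a task-specific regression or classification head and the model fine-tuned on labeled data, which is the downstream setting the abstract's evaluation refers to.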

DOI 10.1145/3447548.3467401

Full Text