IEEE Transactions on Neural Networks and Learning Systems | 2019

Nonlinear Dimensionality Reduction With Missing Data Using Parametric Multiple Imputations


Abstract


Dimensionality reduction (DR) aims at faithfully and meaningfully representing high-dimensional (HD) data in a low-dimensional (LD) space. Recently developed neighbor embedding DR methods achieve outstanding performance, thanks to their ability to foil the curse of dimensionality. Unfortunately, they cannot be directly employed on incomplete data sets, which are becoming ubiquitous in machine learning. Discarding samples with missing features prevents the computation of their LD coordinates and degrades the treatment of the complete samples. Common missing data imputation schemes are not appropriate in the nonlinear DR context either. Indeed, even if they model the data distribution in the feature space, they can, at best, enable the application of a DR scheme on the expected data set. In practice, one would, instead, like to obtain the LD embedding whose cost function value is, on average, closest to the complete data case. As the state-of-the-art DR techniques are nonlinear, the latter embedding results from minimizing the expected cost function on the incomplete database, not from considering the expected data set. This paper addresses these limitations by developing a general methodology for nonlinear DR with missing data that is directly applicable to any DR scheme optimizing a cost function. In order to model the feature dependencies, an HD extension of Gaussian mixture models is first fitted on the incomplete data set. It is then employed under the multiple imputation paradigm to obtain a single relevant LD embedding, thus minimizing the cost function expectation. Extensive experiments demonstrate the superiority of the suggested framework over alternative approaches.
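To make the abstract's distinction concrete, the following is a minimal, hypothetical sketch of the multiple imputation idea it describes, not the paper's actual algorithm: a single Gaussian stands in for the paper's HD Gaussian mixture, PCA stands in for a nonlinear DR scheme, and a simple distance-preservation stress stands in for the DR cost function. The point illustrated is that the cost is averaged over several imputed data sets (the expected cost), rather than evaluated on one mean-imputed (expected) data set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 5 correlated features, ~10% entries missing.
X = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 5))
mask = rng.random(X.shape) < 0.1
X_obs = np.where(mask, np.nan, X)

# Fit a single Gaussian on mean-imputed data (a crude stand-in for the
# paper's HD Gaussian mixture fitted on the incomplete data set).
col_mean = np.nanmean(X_obs, axis=0)
X_filled = np.where(mask, col_mean, X_obs)
mu = X_filled.mean(axis=0)
Sigma = np.cov(X_filled, rowvar=False) + 1e-6 * np.eye(X.shape[1])

def draw_imputation(row, miss):
    """Sample missing entries from the Gaussian conditioned on observed ones."""
    obs = ~miss
    S_mo = Sigma[np.ix_(miss, obs)]
    S_oo = Sigma[np.ix_(obs, obs)]
    S_mm = Sigma[np.ix_(miss, miss)]
    cond_mu = mu[miss] + S_mo @ np.linalg.solve(S_oo, row[obs] - mu[obs])
    cond_S = S_mm - S_mo @ np.linalg.solve(S_oo, S_mo.T)
    return rng.multivariate_normal(cond_mu, cond_S + 1e-9 * np.eye(miss.sum()))

# Multiple imputation: M completed versions of the incomplete data set.
M = 10
imputations = []
for _ in range(M):
    Xi = X_obs.copy()
    for i in range(Xi.shape[0]):
        if mask[i].any():
            Xi[i, mask[i]] = draw_imputation(Xi[i], mask[i])
    imputations.append(Xi)

def pca_embed(Y, d=2):
    """Linear DR stand-in for a nonlinear neighbor embedding method."""
    Yc = Y - Y.mean(axis=0)
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
    return Yc @ Vt[:d].T

def stress(Y_hd, Y_ld):
    """Toy DR cost: mean squared pairwise-distance mismatch."""
    D_hd = np.linalg.norm(Y_hd[:, None] - Y_hd[None], axis=-1)
    D_ld = np.linalg.norm(Y_ld[:, None] - Y_ld[None], axis=-1)
    return ((D_hd - D_ld) ** 2).mean()

# A single candidate LD embedding is scored by its cost averaged over all
# imputations -- the expected cost -- not by its cost on the expected data set.
Y_ld = pca_embed(np.mean(imputations, axis=0))
expected_cost = float(np.mean([stress(Xi, Y_ld) for Xi in imputations]))
print(expected_cost)
```

In the paper's actual framework, the Gaussian component would be a mixture fitted directly on the incomplete data, and the embedding would come from minimizing this expected cost with the chosen nonlinear DR criterion; the sketch only evaluates the expectation for a fixed embedding.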

Volume 30
Pages 1166-1179
DOI 10.1109/TNNLS.2018.2861891
Language English
Journal IEEE Transactions on Neural Networks and Learning Systems

Full Text