Artificial intelligence in medicine | 2019

Compositional model based on factorial evolution for realizing multi-task learning in bacterial virulent protein prediction


Abstract


Multitask learning has established itself in the machine learning field, with diverse applications including, but not limited to, bioinformatics and pattern recognition. Bioinformatics offers a wide range of applications for Multitask Learning (MTL) methods. Identification of bacterial virulent proteins is one such application, helping to understand the virulence mechanism for drug and vaccine design. However, the limiting factor in building a reliable prediction model is the scarcity of experimentally verified training data. Casting the problem in a multitask learning scenario can be beneficial in this setting. The primary objective of a multitask learning model is to reuse auxiliary data from multiple related domains to improve prediction in a target domain with limited labeled data. Because multiple related data sources are combined, the probability distributions of the features may vary across tasks. To cope with this change in feature distribution, this paper proposes a composite model for the multitask learning framework based on two principles: discovering shared parameters that capture the relationships between tasks, and learning a common underlying feature representation shared by the related tasks. Through multi-kernel learning and factorial evolution, the proposed framework is able to discover the shared kernel parameters and the latent feature representation common among the tasks. To examine the benefits of the proposed model, an extensive experiment is performed on the freely available dataset from the VirulentPred web server. Based on the results, we found that the multitask learning model performs better than the conventional single-task model. Additionally, our findings indicate that when the distribution difference between the tasks is large, training separate models yields slightly better predictions, whereas when the distribution difference is small, multitask learning significantly outperforms individual learning.
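To make the shared-representation idea concrete, the following is a minimal sketch, not the paper's multi-kernel/factorial-evolution method: several related binary tasks are fit on a jointly learned low-dimensional latent representation, alternating between task-specific ridge fits and refitting the shared projection. The dataset shapes, latent dimension k, and regularisation strength lam are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, d, w_true, noise=0.1):
    """Generate a synthetic binary task whose labels depend on w_true."""
    X = rng.normal(size=(n, d))
    y = np.sign(X @ w_true + noise * rng.normal(size=n))
    return X, y

d, k = 20, 3                       # input dimension, shared latent dimension
shared = rng.normal(size=(d, k))   # ground-truth shared structure across tasks
tasks = [make_task(60, d, shared @ rng.normal(size=k)) for _ in range(3)]

# Alternating optimisation: with the shared projection U fixed, solve each
# task's ridge problem in the latent space; then refit U from the stacked
# task weights via an SVD (a simplification of multi-task feature learning).
U = np.linalg.qr(rng.normal(size=(d, k)))[0]
lam = 1e-2
for _ in range(20):
    W = []
    for X, y in tasks:
        Z = X @ U                                           # shared representation
        a = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ y)
        W.append(U @ a)                                     # map back to input space
    U = np.linalg.svd(np.stack(W, axis=1), full_matrices=False)[0][:, :k]

for t, (X, y) in enumerate(tasks):
    Z = X @ U
    a = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ y)
    print(f"task {t}: training accuracy {np.mean(np.sign(Z @ a) == y):.2f}")
```

The sketch only illustrates the "common underlying representation" principle on synthetic data; the paper's composite model additionally learns shared kernel parameters through multi-kernel learning and factorial evolution on protein sequence features.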

Volume 101
Pages 101757
DOI 10.1016/j.artmed.2019.101757
Language English
Journal Artificial intelligence in medicine
