ArXiv | 2021

An Empirical Study on Hyperparameter Optimization for Fine-Tuning Pre-trained Language Models


Abstract


The performance of fine-tuning pre-trained language models largely depends on the hyperparameter configuration. In this paper, we investigate the performance of modern hyperparameter optimization (HPO) methods on fine-tuning pre-trained language models. First, we study and report three HPO algorithms' performance on fine-tuning two state-of-the-art language models on the GLUE dataset. We find that, given the same time budget, HPO often fails to outperform grid search for two reasons: insufficient time budget and overfitting. We propose two general strategies and an experimental procedure to systematically troubleshoot HPO's failure cases. By applying the procedure, we observe that HPO can succeed with more appropriate settings of the search space and time budget; however, in certain cases overfitting remains. Finally, we make suggestions for future work. Our implementation can be found at https://github.com/microsoft/FLAML/tree/main/flaml
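For concreteness, the sketch below illustrates the kind of time-budgeted hyperparameter search the abstract compares against grid search: candidate configurations are sampled from a search space until a wall-clock budget is exhausted, and the best validation score is kept. This is an illustrative sketch, not the paper's implementation; the search-space ranges and the fine_tune_and_evaluate objective are hypothetical placeholders for fine-tuning a pre-trained model on a GLUE task.

    import random
    import time

    # Hypothetical search space over common fine-tuning hyperparameters.
    SEARCH_SPACE = {
        "learning_rate": lambda: 10 ** random.uniform(-6, -4),        # log-uniform
        "train_batch_size": lambda: random.choice([16, 32]),
        "num_train_epochs": lambda: random.choice([2, 3, 4, 5]),
        "warmup_ratio": lambda: random.uniform(0.0, 0.2),
    }

    def fine_tune_and_evaluate(config):
        """Placeholder objective: fine-tune the model with `config` on a GLUE
        task and return its validation score (to be supplied by the user)."""
        raise NotImplementedError("Plug in your own fine-tuning and evaluation code.")

    def random_search(time_budget_s):
        """Sample configurations until the wall-clock budget runs out,
        returning the best configuration and its validation score."""
        best_score, best_config = float("-inf"), None
        deadline = time.time() + time_budget_s
        while time.time() < deadline:
            config = {name: sample() for name, sample in SEARCH_SPACE.items()}
            score = fine_tune_and_evaluate(config)
            if score > best_score:
                best_score, best_config = score, config
        return best_config, best_score

Whether such a budgeted search beats grid search depends, as the abstract notes, on the size of the time budget and on how prone the chosen search space is to overfitting the validation set.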

Volume abs/2106.09204
DOI 10.18653/v1/2021.acl-long.178
Language English
Journal ArXiv
