ArXiv | 2021

LEGOEval: An Open-Source Toolkit for Dialogue System Evaluation via Crowdsourcing


Abstract


We present LEGOEval, an open-source toolkit that enables researchers to easily evaluate dialogue systems in a few lines of code using the online crowdsourcing platform Amazon Mechanical Turk. Compared to existing toolkits, LEGOEval features a flexible task design by providing a Python API that maps to commonly used React.js interface components. Using our built-in pages, researchers can easily personalize their evaluation procedures, as if playing with LEGO blocks. Thus, LEGOEval provides a fast, consistent method for reproducing human evaluation results. Beyond the flexible task design, LEGOEval also offers an easy-to-use API for reviewing collected data.
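The abstract describes assembling an evaluation task from reusable page components through a Python API, in the spirit of stacking LEGO blocks. The following is a purely illustrative sketch of that block-assembly idea; the class and method names (Task, Page, add, launch) are hypothetical and are not the actual LEGOEval API.

```python
# Hypothetical sketch of a block-style task builder. Names such as Task,
# Page, add(), and launch() are illustrative only and do NOT reflect the
# real LEGOEval API.

class Page:
    """A reusable interface component, analogous to a single LEGO block."""
    def __init__(self, name, **config):
        self.name = name
        self.config = config


class Task:
    """An evaluation task assembled from a sequence of pages."""
    def __init__(self, title):
        self.title = title
        self.pages = []

    def add(self, page):
        self.pages.append(page)
        return self  # allow chaining so a task fits in a few lines

    def launch(self):
        # In a real toolkit this step would render the corresponding
        # React.js components and post the task to Amazon Mechanical Turk;
        # here we simply print the assembled pages.
        for page in self.pages:
            print(f"[{self.title}] page: {page.name} {page.config}")


# Assemble an evaluation task from built-in style blocks.
task = (
    Task("Dialogue quality evaluation")
    .add(Page("instructions", text="Chat with the bot, then rate it."))
    .add(Page("chat", bot_endpoint="https://example.org/bot"))
    .add(Page("survey", questions=["Was the response coherent?"]))
)
task.launch()
```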

Volume abs/2105.01992
DOI 10.18653/v1/2021.acl-demo.38
Language English
Journal ArXiv
