ArXiv | 2021

Recurrent neural network transducer for Japanese and Chinese offline handwritten text recognition


Abstract


In this paper, we propose an RNN-Transducer model for recognizing Japanese and Chinese offline handwritten text line images. To the best of our knowledge, this is the first approach that adopts the RNN-Transducer model for offline handwritten text recognition. The proposed model consists of three main components: a visual feature encoder that extracts visual features from an input image with a CNN and then encodes them with a BLSTM; a linguistic context encoder that extracts and encodes linguistic features from the input image with embedding layers and an LSTM; and a joint decoder that combines the visual and linguistic features and decodes them into the final label sequence with fully connected and softmax layers. The proposed model thus takes advantage of both visual and linguistic information in the input image. In the experiments, we evaluated the proposed model on two datasets: Kuzushiji and SCUT-EPT. Experimental results show that the proposed model achieves state-of-the-art performance on both datasets.
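
The three-component architecture described in the abstract maps naturally onto a standard RNN-Transducer layout. Below is a minimal PyTorch sketch of that layout, not the authors' implementation: the CNN backbone, layer sizes, and all names (VisualEncoder, LinguisticEncoder, JointDecoder) are illustrative assumptions.

import torch
import torch.nn as nn

class VisualEncoder(nn.Module):
    """CNN feature extractor followed by a BLSTM over the width axis (assumed layout)."""
    def __init__(self, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.blstm = nn.LSTM(128, hidden, batch_first=True, bidirectional=True)

    def forward(self, images):               # images: (B, 1, H, W)
        f = self.cnn(images)                 # (B, 128, H', W')
        f = f.mean(dim=2).transpose(1, 2)    # pool height -> (B, T, 128)
        out, _ = self.blstm(f)               # (B, T, 2*hidden)
        return out

class LinguisticEncoder(nn.Module):
    """Embedding + LSTM prediction network over previously emitted labels."""
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)

    def forward(self, labels):               # labels: (B, U), label history
        out, _ = self.lstm(self.embed(labels))
        return out                           # (B, U, hidden)

class JointDecoder(nn.Module):
    """Fully connected + softmax over combined visual and linguistic features."""
    def __init__(self, enc_dim, pred_dim, vocab_size, hidden=256):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(enc_dim + pred_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, vocab_size),
        )

    def forward(self, enc, pred):            # enc: (B, T, E), pred: (B, U, P)
        t = enc.unsqueeze(2).expand(-1, -1, pred.size(1), -1)
        u = pred.unsqueeze(1).expand(-1, enc.size(1), -1, -1)
        joint = torch.cat([t, u], dim=-1)    # (B, T, U, E + P)
        return self.fc(joint).log_softmax(dim=-1)  # (B, T, U, vocab)

The (B, T, U, vocab) lattice produced by the joint decoder is the kind of output an RNN-T loss consumes during training (for example, torchaudio.functional.rnnt_loss); the paper's exact training setup may differ.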

Volume abs/2106.14459
DOI 10.1007/978-3-030-86159-9_26
Language English
Journal ArXiv
