Tamizhi-Net OCR: Creating A Quality Large Scale Tamil-Sinhala-English Parallel Corpus Using Deep Learning Based Printed Character Recognition (PCR)
- Project Mentor
- Dr. Uthayasanker Thayasivam
- Contributor
- Charangan Vasantharajan
This research develops a simple, automatic OCR engine that can extract text from documents that use legacy fonts and printer-friendly encodings (which are not optimized for text extraction), in order to create a parallel corpus.
For this purpose, we enhanced the performance of Tesseract 4.1.1 by employing LSTM-based training on many legacy fonts to recognize printed characters in Tamil, Sinhala, and English. In particular, our model detects code-mixed text, numbers, and special characters in printed documents.
This project consists of the following components:
- Dataset
- Model Training
- Model
- Improvements
- Corpus Creation
We created box files with coordinate specifications, and then used jTessBoxEditor to rectify misidentified characters and adjust letter tracking (the spacing between characters) to eliminate bounding-box overlap issues.
The following command generates the TIFF/Box files:
tesstrain.sh --fonts_dir data/fonts \
--fontlist \
--lang tam \
--linedata_only \
--noextract_font_properties \
--training_text data/langdata/tam/tam.training_text \
--langdata_dir data/langdata \
--tessdata_dir data/tessdata \
--save_box_tiff \
--maxpages 100 \
--output_dir data/output
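Each line of a generated .box file pairs one glyph with its bounding-box coordinates in the form glyph, left, bottom, right, top, page; these are the entries edited in jTessBoxEditor. The excerpt below is hypothetical and the coordinates are illustrative only:

த 25 12 48 40 0
ம 52 12 79 40 0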
The table below lists the command-line flags used during training. We finalized these values after conducting several experiments with different settings.
Flag | Value / Description |
---|---|
traineddata | path to the starting traineddata file, which contains the unicharset, word DAWG, punctuation-pattern DAWG, and number DAWG |
model_output | output path for the model files / checkpoints |
learning_rate | 1e-05 |
max_iterations | 5000 |
target_error_rate | 0.001 |
continue_from | path to the previous checkpoint from which to continue training |
stop_training | convert the training checkpoint to a full traineddata file |
train_listfile | filename of a file listing the training data files |
eval_listfile | filename of a file listing the evaluation data files |
The following command starts the training:
OMP_THREAD_LIMIT=8 lstmtraining \
--continue_from data/model/tam.lstm \
--model_output data/finetuned_model/ \
--traineddata data/tessdata/tam.traineddata \
--train_listfile data/output/tam.training_files.txt \
--eval_listfile data/output/tam.training_files.txt \
--max_iterations 5000
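Once training reaches the target error rate or the iteration limit, the checkpoint can be packaged into a final traineddata file with the stop_training flag listed in the table above. A minimal sketch, assuming the best checkpoint was written as data/finetuned_model/tam_checkpoint (the exact name depends on the model_output prefix):

# Convert the training checkpoint into a deployable traineddata file
lstmtraining --stop_training \
--continue_from data/finetuned_model/tam_checkpoint \
--traineddata data/tessdata/tam.traineddata \
--model_output data/finetuned_model/tam.traineddata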
The architecture of the PCR pipeline is shown below. As the first step, we detect the file type and convert the input to images if it is a PDF. The images are then binarized, and character-boundary detection techniques are applied to find character boxes. Next, deep learning modules identify word and line boundaries before recognizing the characters. Finally, the output is post-processed using a language model.
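As a minimal sketch of this pipeline's front end, assuming pdftoppm (from poppler-utils) is installed and the fine-tuned traineddata resides in data/tessdata (file names here are illustrative), a PDF can be converted and recognized as follows; Tesseract handles binarization and line/word/character recognition internally:

# Convert each PDF page to a 300 DPI PNG image (poppler-utils)
pdftoppm -png -r 300 input.pdf page
# Recognize every page image with the fine-tuned Tamil model
for img in page-*.png; do
  tesseract "$img" "${img%.png}" -l tam --tessdata-dir data/tessdata
done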
Below we compare the text extracted by our Tamizhi-Net model with the output of the existing Tesseract model:
- Tamil
- Sinhala
To create the parallel corpus, we downloaded the required PDFs in all three languages from the www.parliament.lk website and fed them into our model to obtain the extracted texts, as sketched below.
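A hypothetical fetch-and-extract loop over the three languages is shown below; the URL is a placeholder rather than an actual parliament.lk path, and tam, sin, and eng are Tesseract's language codes for Tamil, Sinhala, and English:

# Placeholder URL: substitute the real PDF paths from www.parliament.lk
for lang in tam sin eng; do
  wget -O "doc_${lang}.pdf" "https://www.parliament.lk/placeholder/doc_${lang}.pdf"
  pdftoppm -png -r 300 "doc_${lang}.pdf" "doc_${lang}"
  for img in "doc_${lang}"-*.png; do
    tesseract "$img" "${img%.png}" -l "$lang" --tessdata-dir data/tessdata
  done
  # Concatenate per-page text into one document for cross-language alignment
  cat "doc_${lang}"-*.txt > "doc_${lang}.txt"
done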
If you use this work, please cite:

@misc{vasantharajan2021tamizhinet,
title={Tamizhi-Net OCR: Creating A Quality Large Scale Tamil-Sinhala-English Parallel Corpus Using Deep Learning Based Printed Character Recognition (PCR)},
author={Charangan Vasantharajan and Uthayasanker Thayasivam},
year={2021},
eprint={2109.05952},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Apache License 2.0
Please read our code of conduct document.