eng-zle


opus-2020-06-28.zip

  • dataset: opus
  • model: transformer
  • source language(s): eng
  • target language(s): bel bel_Latn orv_Cyrl rue rus ukr
  • pre-processing: normalization + SentencePiece (spm32k,spm32k)
  • a sentence-initial language token is required, in the form >>id<< (id = a valid target language ID)
  • download: opus-2020-06-28.zip
  • test set translations: opus-2020-06-28.test.txt
  • test set scores: opus-2020-06-28.eval.txt
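
Because the model is multilingual, every source sentence must begin with the >>id<< token for the intended target language. A minimal sketch of that preprocessing step (the helper name is illustrative, not part of the release; the language set is copied from the list above):

```python
# Target-language IDs supported by this model (from the list above).
TARGET_LANGS = {"bel", "bel_Latn", "orv_Cyrl", "rue", "rus", "ukr"}

def add_language_token(sentence: str, target_lang: str) -> str:
    """Prefix a source sentence with the sentence-initial >>id<< token."""
    if target_lang not in TARGET_LANGS:
        raise ValueError(f"unsupported target language: {target_lang}")
    return f">>{target_lang}<< {sentence}"
```

For example, `add_language_token("How are you?", "ukr")` yields `>>ukr<< How are you?`, which is then normalized and segmented with SentencePiece before being passed to the model.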

Benchmarks

| testset | BLEU | chr-F |
|---------|------|-------|
| newstest2012-engrus.eng.rus | 25.6 | 0.533 |
| newstest2013-engrus.eng.rus | 20.0 | 0.480 |
| newstest2015-enru-engrus.eng.rus | 22.4 | 0.518 |
| newstest2016-enru-engrus.eng.rus | 21.7 | 0.505 |
| newstest2017-enru-engrus.eng.rus | 23.6 | 0.527 |
| newstest2018-enru-engrus.eng.rus | 20.9 | 0.513 |
| newstest2019-enru-engrus.eng.rus | 22.0 | 0.487 |
| Tatoeba-test.eng-bel.eng.bel | 20.5 | 0.468 |
| Tatoeba-test.eng.multi | 35.6 | 0.570 |
| Tatoeba-test.eng-orv.eng.orv | 0.4 | 0.140 |
| Tatoeba-test.eng-rue.eng.rue | 0.9 | 0.158 |
| Tatoeba-test.eng-rus.eng.rus | 38.8 | 0.598 |
| Tatoeba-test.eng-ukr.eng.ukr | 36.9 | 0.582 |
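
The chr-F column reports the character n-gram F-score (chrF). A simplified single-sentence sketch of how such a score is computed, with n = 1..6 and β = 2 (real evaluations use tooling such as sacrebleu; this is only illustrative):

```python
from collections import Counter

def char_ngrams(text, n):
    """Counter of character n-grams of the given order."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    """Simplified chrF: average character n-gram precision and recall
    over n = 1..max_n, combined into an F-score with recall weight beta."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        if sum(hyp.values()) > 0:
            precisions.append(overlap / sum(hyp.values()))
        if sum(ref.values()) > 0:
            recalls.append(overlap / sum(ref.values()))
    if not precisions or not recalls:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)
```

Identical strings score 1.0 and fully disjoint strings score 0.0; the table's values (scaled 0–1) come from corpus-level scoring of the released test-set translations.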

opus-2020-07-27.zip

  • dataset: opus
  • model: transformer
  • source language(s): eng
  • target language(s): bel bel_Latn orv_Cyrl rue rus ukr
  • pre-processing: normalization + SentencePiece (spm32k,spm32k)
  • a sentence-initial language token is required, in the form >>id<< (id = a valid target language ID)
  • download: opus-2020-07-27.zip
  • test set translations: opus-2020-07-27.test.txt
  • test set scores: opus-2020-07-27.eval.txt

Benchmarks

| testset | BLEU | chr-F |
|---------|------|-------|
| newstest2012-engrus.eng.rus | 25.9 | 0.535 |
| newstest2013-engrus.eng.rus | 20.0 | 0.480 |
| newstest2015-enru-engrus.eng.rus | 22.5 | 0.517 |
| newstest2016-enru-engrus.eng.rus | 21.6 | 0.506 |
| newstest2017-enru-engrus.eng.rus | 23.4 | 0.526 |
| newstest2018-enru-engrus.eng.rus | 20.8 | 0.512 |
| newstest2019-enru-engrus.eng.rus | 21.6 | 0.485 |
| Tatoeba-test.eng-bel.eng.bel | 20.9 | 0.464 |
| Tatoeba-test.eng.multi | 35.3 | 0.564 |
| Tatoeba-test.eng-orv.eng.orv | 0.5 | 0.134 |
| Tatoeba-test.eng-rue.eng.rue | 1.3 | 0.178 |
| Tatoeba-test.eng-rus.eng.rus | 39.0 | 0.596 |
| Tatoeba-test.eng-ukr.eng.ukr | 36.7 | 0.579 |
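
The BLEU column measures modified n-gram precision. A simplified sentence-level sketch (published scores are corpus-level and use standardized tokenization, e.g. via sacrebleu; this version is only illustrative):

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    """Counter of word n-grams of the given order."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of clipped n-gram
    precisions for n = 1..max_n, times a brevity penalty."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        h, r = ngram_counts(hyp, n), ngram_counts(ref, n)
        total = sum(h.values())
        if total == 0:
            return 0.0
        clipped = sum((h & r).values())  # counts clipped by the reference
        if clipped == 0:
            return 0.0  # no smoothing in this sketch
        precisions.append(clipped / total)
    # Brevity penalty discourages overly short hypotheses.
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A perfect match scores 1.0 (reported in the tables as 0–100); a hypothesis sharing no 4-grams with the reference scores 0.0 here, which is why production scorers add smoothing.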

opus2m-2020-08-02.zip

  • dataset: opus2m
  • model: transformer
  • source language(s): eng
  • target language(s): bel bel_Latn orv_Cyrl rue rus ukr
  • pre-processing: normalization + SentencePiece (spm32k,spm32k)
  • a sentence-initial language token is required, in the form >>id<< (id = a valid target language ID)
  • download: opus2m-2020-08-02.zip
  • test set translations: opus2m-2020-08-02.test.txt
  • test set scores: opus2m-2020-08-02.eval.txt

Benchmarks

| testset | BLEU | chr-F |
|---------|------|-------|
| newstest2012-engrus.eng.rus | 27.4 | 0.550 |
| newstest2013-engrus.eng.rus | 21.4 | 0.493 |
| newstest2015-enru-engrus.eng.rus | 24.2 | 0.534 |
| newstest2016-enru-engrus.eng.rus | 23.3 | 0.518 |
| newstest2017-enru-engrus.eng.rus | 25.3 | 0.541 |
| newstest2018-enru-engrus.eng.rus | 22.4 | 0.527 |
| newstest2019-enru-engrus.eng.rus | 24.1 | 0.505 |
| Tatoeba-test.eng-bel.eng.bel | 20.8 | 0.471 |
| Tatoeba-test.eng.multi | 37.2 | 0.580 |
| Tatoeba-test.eng-orv.eng.orv | 0.6 | 0.130 |
| Tatoeba-test.eng-rue.eng.rue | 1.4 | 0.168 |
| Tatoeba-test.eng-rus.eng.rus | 41.3 | 0.616 |
| Tatoeba-test.eng-ukr.eng.ukr | 38.7 | 0.596 |