alv-eng

opus-2020-07-04.zip

  • dataset: opus
  • model: transformer
  • source language(s): ewe fuc fuv ibo kdx kin lin lug nya run sag sna toi_Latn tso umb wol xho yor zul
  • target language(s): eng
  • pre-processing: normalization + SentencePiece (spm32k,spm32k)
  • download: opus-2020-07-04.zip
  • test set translations: opus-2020-07-04.test.txt
  • test set scores: opus-2020-07-04.eval.txt

Benchmarks

| testset | BLEU | chr-F |
|---------|------|-------|
| Tatoeba-test.ewe-eng.ewe.eng | 5.9 | 0.309 |
| Tatoeba-test.ful-eng.ful.eng | 1.2 | 0.119 |
| Tatoeba-test.ibo-eng.ibo.eng | 6.6 | 0.176 |
| Tatoeba-test.kdx-eng.kdx.eng | 4.0 | 0.217 |
| Tatoeba-test.kin-eng.kin.eng | 28.5 | 0.434 |
| Tatoeba-test.lin-eng.lin.eng | 3.3 | 0.197 |
| Tatoeba-test.lug-eng.lug.eng | 5.8 | 0.257 |
| Tatoeba-test.multi.eng | 24.7 | 0.409 |
| Tatoeba-test.nya-eng.nya.eng | 34.3 | 0.492 |
| Tatoeba-test.run-eng.run.eng | 25.8 | 0.418 |
| Tatoeba-test.sag-eng.sag.eng | 2.6 | 0.200 |
| Tatoeba-test.sna-eng.sna.eng | 17.3 | 0.361 |
| Tatoeba-test.toi-eng.toi.eng | 5.3 | 0.120 |
| Tatoeba-test.tso-eng.tso.eng | 55.0 | 0.652 |
| Tatoeba-test.umb-eng.umb.eng | 6.3 | 0.209 |
| Tatoeba-test.wol-eng.wol.eng | 9.9 | 0.213 |
| Tatoeba-test.xho-eng.xho.eng | 35.2 | 0.521 |
| Tatoeba-test.yor-eng.yor.eng | 21.7 | 0.360 |
| Tatoeba-test.zul-eng.zul.eng | 48.2 | 0.613 |
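The chr-F column in the benchmark tables is the character n-gram F-score (Popović, 2015), reported here on a 0–1 scale. As a rough illustration of what the metric measures, here is a simplified sentence-level sketch in pure Python (function names are my own; this is not the exact implementation behind the scores above, which come from the evaluation files shipped with each model):

```python
from collections import Counter

def char_ngrams(text, n):
    # chrF operates on character n-grams; whitespace is stripped first.
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    """Simplified sentence-level chrF: character n-gram precision and
    recall averaged over n = 1..max_n, combined as an F-beta score
    (beta = 2 weights recall twice as heavily as precision)."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp = char_ngrams(hypothesis, n)
        ref = char_ngrams(reference, n)
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        if sum(hyp.values()) > 0:
            precisions.append(overlap / sum(hyp.values()))
        if sum(ref.values()) > 0:
            recalls.append(overlap / sum(ref.values()))
    if not precisions or not recalls:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```

Because it scores partial character-level overlap, chr-F stays informative for morphologically rich source languages even when word-level BLEU is near zero, which is why the low-resource rows above often show a non-trivial chr-F next to a single-digit BLEU.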

opus-2020-07-14.zip

  • dataset: opus
  • model: transformer
  • source language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
  • target language(s): eng
  • pre-processing: normalization + SentencePiece (spm32k,spm32k)
  • download: opus-2020-07-14.zip
  • test set translations: opus-2020-07-14.test.txt
  • test set scores: opus-2020-07-14.eval.txt

Benchmarks

| testset | BLEU | chr-F |
|---------|------|-------|
| Tatoeba-test.ewe-eng.ewe.eng | 11.3 | 0.298 |
| Tatoeba-test.ful-eng.ful.eng | 0.5 | 0.114 |
| Tatoeba-test.ibo-eng.ibo.eng | 2.1 | 0.173 |
| Tatoeba-test.kin-eng.kin.eng | 38.9 | 0.501 |
| Tatoeba-test.lin-eng.lin.eng | 1.7 | 0.189 |
| Tatoeba-test.lug-eng.lug.eng | 4.2 | 0.136 |
| Tatoeba-test.multi.eng | 21.5 | 0.378 |
| Tatoeba-test.nya-eng.nya.eng | 32.7 | 0.475 |
| Tatoeba-test.run-eng.run.eng | 25.7 | 0.424 |
| Tatoeba-test.sag-eng.sag.eng | 1.0 | 0.111 |
| Tatoeba-test.sna-eng.sna.eng | 20.6 | 0.394 |
| Tatoeba-test.toi-eng.toi.eng | 5.8 | 0.142 |
| Tatoeba-test.tso-eng.tso.eng | 100.0 | 1.000 |
| Tatoeba-test.umb-eng.umb.eng | 5.3 | 0.210 |
| Tatoeba-test.wol-eng.wol.eng | 11.7 | 0.248 |
| Tatoeba-test.xho-eng.xho.eng | 36.4 | 0.552 |
| Tatoeba-test.yor-eng.yor.eng | 29.7 | 0.419 |
| Tatoeba-test.zul-eng.zul.eng | 41.6 | 0.571 |

opus-2020-07-19.zip

  • dataset: opus
  • model: transformer
  • source language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
  • target language(s): eng
  • pre-processing: normalization + SentencePiece (spm32k,spm32k)
  • download: opus-2020-07-19.zip
  • test set translations: opus-2020-07-19.test.txt
  • test set scores: opus-2020-07-19.eval.txt

Benchmarks

| testset | BLEU | chr-F |
|---------|------|-------|
| Tatoeba-test.ewe-eng.ewe.eng | 10.1 | 0.349 |
| Tatoeba-test.ful-eng.ful.eng | 0.9 | 0.098 |
| Tatoeba-test.ibo-eng.ibo.eng | 3.5 | 0.190 |
| Tatoeba-test.kin-eng.kin.eng | 35.6 | 0.499 |
| Tatoeba-test.lin-eng.lin.eng | 1.4 | 0.178 |
| Tatoeba-test.lug-eng.lug.eng | 3.9 | 0.150 |
| Tatoeba-test.multi.eng | 20.8 | 0.375 |
| Tatoeba-test.nya-eng.nya.eng | 34.3 | 0.460 |
| Tatoeba-test.run-eng.run.eng | 24.6 | 0.419 |
| Tatoeba-test.sag-eng.sag.eng | 1.1 | 0.161 |
| Tatoeba-test.sna-eng.sna.eng | 16.8 | 0.385 |
| Tatoeba-test.swa-eng.swa.eng | 4.7 | 0.193 |
| Tatoeba-test.toi-eng.toi.eng | 3.9 | 0.160 |
| Tatoeba-test.tso-eng.tso.eng | 100.0 | 1.000 |
| Tatoeba-test.umb-eng.umb.eng | 4.9 | 0.218 |
| Tatoeba-test.wol-eng.wol.eng | 7.3 | 0.246 |
| Tatoeba-test.xho-eng.xho.eng | 37.6 | 0.549 |
| Tatoeba-test.yor-eng.yor.eng | 33.5 | 0.446 |
| Tatoeba-test.zul-eng.zul.eng | 43.2 | 0.563 |

opus-2020-07-26.zip

  • dataset: opus
  • model: transformer
  • source language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
  • target language(s): eng
  • pre-processing: normalization + SentencePiece (spm32k,spm32k)
  • download: opus-2020-07-26.zip
  • test set translations: opus-2020-07-26.test.txt
  • test set scores: opus-2020-07-26.eval.txt

Benchmarks

| testset | BLEU | chr-F |
|---------|------|-------|
| Tatoeba-test.ewe-eng.ewe.eng | 9.2 | 0.373 |
| Tatoeba-test.ful-eng.ful.eng | 0.5 | 0.100 |
| Tatoeba-test.ibo-eng.ibo.eng | 2.8 | 0.179 |
| Tatoeba-test.kin-eng.kin.eng | 31.5 | 0.489 |
| Tatoeba-test.lin-eng.lin.eng | 2.1 | 0.188 |
| Tatoeba-test.lug-eng.lug.eng | 3.6 | 0.142 |
| Tatoeba-test.multi.eng | 20.6 | 0.371 |
| Tatoeba-test.nya-eng.nya.eng | 37.2 | 0.478 |
| Tatoeba-test.run-eng.run.eng | 24.7 | 0.414 |
| Tatoeba-test.sag-eng.sag.eng | 2.7 | 0.169 |
| Tatoeba-test.sna-eng.sna.eng | 23.1 | 0.399 |
| Tatoeba-test.swa-eng.swa.eng | 4.8 | 0.195 |
| Tatoeba-test.toi-eng.toi.eng | 13.1 | 0.301 |
| Tatoeba-test.tso-eng.tso.eng | 100.0 | 1.000 |
| Tatoeba-test.umb-eng.umb.eng | 5.3 | 0.196 |
| Tatoeba-test.wol-eng.wol.eng | 5.7 | 0.231 |
| Tatoeba-test.xho-eng.xho.eng | 36.5 | 0.536 |
| Tatoeba-test.yor-eng.yor.eng | 30.1 | 0.430 |
| Tatoeba-test.zul-eng.zul.eng | 39.6 | 0.553 |

opus2m-2020-07-31.zip

  • dataset: opus2m
  • model: transformer
  • source language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
  • target language(s): eng
  • pre-processing: normalization + SentencePiece (spm32k,spm32k)
  • download: opus2m-2020-07-31.zip
  • test set translations: opus2m-2020-07-31.test.txt
  • test set scores: opus2m-2020-07-31.eval.txt

Benchmarks

| testset | BLEU | chr-F |
|---------|------|-------|
| Tatoeba-test.ewe-eng.ewe.eng | 6.3 | 0.328 |
| Tatoeba-test.ful-eng.ful.eng | 0.4 | 0.108 |
| Tatoeba-test.ibo-eng.ibo.eng | 4.5 | 0.196 |
| Tatoeba-test.kin-eng.kin.eng | 30.7 | 0.511 |
| Tatoeba-test.lin-eng.lin.eng | 2.8 | 0.213 |
| Tatoeba-test.lug-eng.lug.eng | 3.4 | 0.140 |
| Tatoeba-test.multi.eng | 20.9 | 0.376 |
| Tatoeba-test.nya-eng.nya.eng | 38.7 | 0.492 |
| Tatoeba-test.run-eng.run.eng | 24.5 | 0.417 |
| Tatoeba-test.sag-eng.sag.eng | 5.5 | 0.177 |
| Tatoeba-test.sna-eng.sna.eng | 26.9 | 0.412 |
| Tatoeba-test.swa-eng.swa.eng | 4.9 | 0.196 |
| Tatoeba-test.toi-eng.toi.eng | 3.9 | 0.147 |
| Tatoeba-test.tso-eng.tso.eng | 76.7 | 0.957 |
| Tatoeba-test.umb-eng.umb.eng | 4.0 | 0.195 |
| Tatoeba-test.wol-eng.wol.eng | 3.7 | 0.170 |
| Tatoeba-test.xho-eng.xho.eng | 38.9 | 0.556 |
| Tatoeba-test.yor-eng.yor.eng | 25.1 | 0.412 |
| Tatoeba-test.zul-eng.zul.eng | 46.1 | 0.623 |

opus4m-2020-08-12.zip

  • dataset: opus4m
  • model: transformer
  • source language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
  • target language(s): eng
  • pre-processing: normalization + SentencePiece (spm32k,spm32k)
  • download: opus4m-2020-08-12.zip
  • test set translations: opus4m-2020-08-12.test.txt
  • test set scores: opus4m-2020-08-12.eval.txt

Benchmarks

| testset | BLEU | chr-F |
|---------|------|-------|
| Tatoeba-test.ewe-eng.ewe.eng | 6.9 | 0.268 |
| Tatoeba-test.ful-eng.ful.eng | 0.9 | 0.119 |
| Tatoeba-test.ibo-eng.ibo.eng | 5.0 | 0.171 |
| Tatoeba-test.kin-eng.kin.eng | 38.4 | 0.545 |
| Tatoeba-test.lin-eng.lin.eng | 6.8 | 0.222 |
| Tatoeba-test.lug-eng.lug.eng | 4.5 | 0.218 |
| Tatoeba-test.multi.eng | 21.3 | 0.382 |
| Tatoeba-test.nya-eng.nya.eng | 32.1 | 0.508 |
| Tatoeba-test.run-eng.run.eng | 25.5 | 0.427 |
| Tatoeba-test.sag-eng.sag.eng | 1.3 | 0.122 |
| Tatoeba-test.sna-eng.sna.eng | 23.3 | 0.393 |
| Tatoeba-test.swa-eng.swa.eng | 4.6 | 0.197 |
| Tatoeba-test.toi-eng.toi.eng | 5.5 | 0.200 |
| Tatoeba-test.tso-eng.tso.eng | 100.0 | 1.000 |
| Tatoeba-test.umb-eng.umb.eng | 11.4 | 0.285 |
| Tatoeba-test.wol-eng.wol.eng | 8.7 | 0.323 |
| Tatoeba-test.xho-eng.xho.eng | 38.4 | 0.555 |
| Tatoeba-test.yor-eng.yor.eng | 15.1 | 0.331 |
| Tatoeba-test.zul-eng.zul.eng | 42.4 | 0.573 |
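Each release's `.eval.txt` follows the same flat `testset BLEU chr-F` layout shown in the tables above, so the releases are easy to compare programmatically. A small sketch (helper names are my own, and the rows are a subset copied from the opus-2020-07-04 and opus4m-2020-08-12 tables):

```python
def parse_rows(text):
    # Each row: "<testset-name> <BLEU> <chr-F>", whitespace-separated.
    scores = {}
    for line in text.strip().splitlines():
        name, bleu, chrf = line.split()
        scores[name] = (float(bleu), float(chrf))
    return scores

release_a = parse_rows("""
Tatoeba-test.kin-eng.kin.eng 28.5 0.434
Tatoeba-test.xho-eng.xho.eng 35.2 0.521
Tatoeba-test.zul-eng.zul.eng 48.2 0.613
""")

release_b = parse_rows("""
Tatoeba-test.kin-eng.kin.eng 38.4 0.545
Tatoeba-test.xho-eng.xho.eng 38.4 0.555
Tatoeba-test.zul-eng.zul.eng 42.4 0.573
""")

def better_release(a, b):
    # For each testset present in both releases, report which one
    # scores higher on BLEU ("a" wins ties).
    return {t: ("a" if a[t][0] >= b[t][0] else "b")
            for t in a.keys() & b.keys()}
```

As the tables show, no single release dominates: here opus-2020-07-04 is stronger on zul-eng while opus4m-2020-08-12 is stronger on kin-eng and xho-eng, so the choice of release can depend on the source language of interest.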