nic-eng

opus-2020-07-04.zip

  • dataset: opus
  • model: transformer
  • source language(s): bam_Latn ewe fuc fuv ibo kdx kin lin lug nya run sag sna toi_Latn tso umb wol xho yor zul
  • target language(s): eng
  • pre-processing: normalization + SentencePiece (spm32k,spm32k)
  • download: opus-2020-07-04.zip
  • test set translations: opus-2020-07-04.test.txt
  • test set scores: opus-2020-07-04.eval.txt

Benchmarks

| testset | BLEU | chr-F |
|---------|------|-------|
| Tatoeba-test.bam-eng.bam.eng | 4.9 | 0.087 |
| Tatoeba-test.ewe-eng.ewe.eng | 7.6 | 0.283 |
| Tatoeba-test.ful-eng.ful.eng | 1.2 | 0.114 |
| Tatoeba-test.ibo-eng.ibo.eng | 5.2 | 0.163 |
| Tatoeba-test.kdx-eng.kdx.eng | 3.8 | 0.181 |
| Tatoeba-test.kin-eng.kin.eng | 23.1 | 0.448 |
| Tatoeba-test.lin-eng.lin.eng | 4.1 | 0.211 |
| Tatoeba-test.lug-eng.lug.eng | 9.0 | 0.275 |
| Tatoeba-test.multi.eng | 24.2 | 0.411 |
| Tatoeba-test.nya-eng.nya.eng | 33.0 | 0.464 |
| Tatoeba-test.run-eng.run.eng | 25.0 | 0.422 |
| Tatoeba-test.sag-eng.sag.eng | 0.9 | 0.086 |
| Tatoeba-test.sna-eng.sna.eng | 24.9 | 0.395 |
| Tatoeba-test.toi-eng.toi.eng | 5.3 | 0.107 |
| Tatoeba-test.tso-eng.tso.eng | 76.5 | 0.855 |
| Tatoeba-test.umb-eng.umb.eng | 7.5 | 0.211 |
| Tatoeba-test.wol-eng.wol.eng | 7.4 | 0.238 |
| Tatoeba-test.xho-eng.xho.eng | 35.2 | 0.518 |
| Tatoeba-test.yor-eng.yor.eng | 18.7 | 0.349 |
| Tatoeba-test.zul-eng.zul.eng | 39.1 | 0.558 |
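
The benchmark rows above follow a fixed `testset BLEU chr-F` layout, so score lookups can be automated; a minimal parsing sketch (the function name and dictionary layout are illustrative, not part of the release):

```python
# Parse benchmark rows of the form
# "Tatoeba-test.bam-eng.bam.eng 4.9 0.087" into a score lookup table.
def parse_benchmarks(lines):
    scores = {}
    for line in lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip blank or malformed lines
        testset, bleu, chrf = parts
        try:
            scores[testset] = {"BLEU": float(bleu), "chr-F": float(chrf)}
        except ValueError:
            continue  # skip the "testset BLEU chr-F" header row
    return scores

rows = """\
testset BLEU chr-F
Tatoeba-test.multi.eng 24.2 0.411
Tatoeba-test.zul-eng.zul.eng 39.1 0.558
""".splitlines()

scores = parse_benchmarks(rows)
print(scores["Tatoeba-test.zul-eng.zul.eng"]["BLEU"])
```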

opus-2020-07-14.zip

  • dataset: opus
  • model: transformer
  • source language(s): bam_Latn ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
  • target language(s): eng
  • pre-processing: normalization + SentencePiece (spm32k,spm32k)
  • download: opus-2020-07-14.zip
  • test set translations: opus-2020-07-14.test.txt
  • test set scores: opus-2020-07-14.eval.txt

Benchmarks

| testset | BLEU | chr-F |
|---------|------|-------|
| Tatoeba-test.bam-eng.bam.eng | 6.0 | 0.122 |
| Tatoeba-test.ewe-eng.ewe.eng | 6.4 | 0.277 |
| Tatoeba-test.ful-eng.ful.eng | 1.2 | 0.108 |
| Tatoeba-test.ibo-eng.ibo.eng | 5.7 | 0.183 |
| Tatoeba-test.kin-eng.kin.eng | 34.3 | 0.490 |
| Tatoeba-test.lin-eng.lin.eng | 6.5 | 0.207 |
| Tatoeba-test.lug-eng.lug.eng | 13.1 | 0.333 |
| Tatoeba-test.multi.eng | 21.8 | 0.381 |
| Tatoeba-test.nya-eng.nya.eng | 35.2 | 0.513 |
| Tatoeba-test.run-eng.run.eng | 26.3 | 0.428 |
| Tatoeba-test.sag-eng.sag.eng | 2.1 | 0.152 |
| Tatoeba-test.sna-eng.sna.eng | 19.8 | 0.384 |
| Tatoeba-test.toi-eng.toi.eng | 3.9 | 0.182 |
| Tatoeba-test.tso-eng.tso.eng | 80.0 | 0.878 |
| Tatoeba-test.umb-eng.umb.eng | 7.9 | 0.255 |
| Tatoeba-test.wol-eng.wol.eng | 7.1 | 0.168 |
| Tatoeba-test.xho-eng.xho.eng | 34.8 | 0.527 |
| Tatoeba-test.yor-eng.yor.eng | 26.8 | 0.411 |
| Tatoeba-test.zul-eng.zul.eng | 45.3 | 0.604 |
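
chr-F is a character n-gram F-score (Popović, 2015). The numbers reported here come from the released eval files; a simplified sketch of the metric is shown below (the β value and whitespace handling are assumptions — published evaluations use a standard implementation such as sacreBLEU, which may differ in these details):

```python
from collections import Counter

def char_ngrams(text, n):
    # Character n-grams with whitespace removed (an assumption;
    # implementations differ on whitespace handling).
    chars = text.replace(" ", "")
    return Counter(chars[i:i + n] for i in range(len(chars) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    """Simplified sentence-level chr-F: average F-score over character
    1..max_n-grams, with recall weighted by beta."""
    fscores = []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue  # no n-grams of this order exist
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        precision = overlap / sum(hyp.values())
        recall = overlap / sum(ref.values())
        if precision + recall == 0:
            fscores.append(0.0)
        else:
            fscores.append((1 + beta ** 2) * precision * recall
                           / (beta ** 2 * precision + recall))
    return sum(fscores) / len(fscores) if fscores else 0.0

print(chrf("a perfect match", "a perfect match"))
```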

opus-2020-07-20.zip

  • dataset: opus
  • model: transformer
  • source language(s): bam_Latn ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
  • target language(s): eng
  • pre-processing: normalization + SentencePiece (spm32k,spm32k)
  • download: opus-2020-07-20.zip
  • test set translations: opus-2020-07-20.test.txt
  • test set scores: opus-2020-07-20.eval.txt

Benchmarks

| testset | BLEU | chr-F |
|---------|------|-------|
| Tatoeba-test.bam-eng.bam.eng | 4.0 | 0.085 |
| Tatoeba-test.ewe-eng.ewe.eng | 7.5 | 0.328 |
| Tatoeba-test.ful-eng.ful.eng | 0.5 | 0.105 |
| Tatoeba-test.ibo-eng.ibo.eng | 5.8 | 0.210 |
| Tatoeba-test.kin-eng.kin.eng | 27.4 | 0.449 |
| Tatoeba-test.lin-eng.lin.eng | 1.9 | 0.185 |
| Tatoeba-test.lug-eng.lug.eng | 12.7 | 0.336 |
| Tatoeba-test.multi.eng | 21.1 | 0.373 |
| Tatoeba-test.nya-eng.nya.eng | 33.7 | 0.514 |
| Tatoeba-test.run-eng.run.eng | 25.3 | 0.419 |
| Tatoeba-test.sag-eng.sag.eng | 1.9 | 0.102 |
| Tatoeba-test.sna-eng.sna.eng | 17.3 | 0.367 |
| Tatoeba-test.swa-eng.swa.eng | 4.6 | 0.194 |
| Tatoeba-test.toi-eng.toi.eng | 4.3 | 0.161 |
| Tatoeba-test.tso-eng.tso.eng | 80.0 | 0.878 |
| Tatoeba-test.umb-eng.umb.eng | 5.2 | 0.226 |
| Tatoeba-test.wol-eng.wol.eng | 4.4 | 0.164 |
| Tatoeba-test.xho-eng.xho.eng | 35.0 | 0.520 |
| Tatoeba-test.yor-eng.yor.eng | 23.2 | 0.394 |
| Tatoeba-test.zul-eng.zul.eng | 44.0 | 0.612 |

opus-2020-07-27.zip

  • dataset: opus
  • model: transformer
  • source language(s): bam_Latn ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
  • target language(s): eng
  • pre-processing: normalization + SentencePiece (spm32k,spm32k)
  • download: opus-2020-07-27.zip
  • test set translations: opus-2020-07-27.test.txt
  • test set scores: opus-2020-07-27.eval.txt

Benchmarks

| testset | BLEU | chr-F |
|---------|------|-------|
| Tatoeba-test.bam-eng.bam.eng | 3.4 | 0.100 |
| Tatoeba-test.ewe-eng.ewe.eng | 7.1 | 0.351 |
| Tatoeba-test.ful-eng.ful.eng | 0.6 | 0.109 |
| Tatoeba-test.ibo-eng.ibo.eng | 4.3 | 0.202 |
| Tatoeba-test.kin-eng.kin.eng | 35.5 | 0.484 |
| Tatoeba-test.lin-eng.lin.eng | 3.8 | 0.188 |
| Tatoeba-test.lug-eng.lug.eng | 4.5 | 0.153 |
| Tatoeba-test.multi.eng | 20.3 | 0.371 |
| Tatoeba-test.nya-eng.nya.eng | 34.0 | 0.493 |
| Tatoeba-test.run-eng.run.eng | 24.1 | 0.416 |
| Tatoeba-test.sag-eng.sag.eng | 16.1 | 0.268 |
| Tatoeba-test.sna-eng.sna.eng | 17.7 | 0.367 |
| Tatoeba-test.swa-eng.swa.eng | 3.8 | 0.189 |
| Tatoeba-test.toi-eng.toi.eng | 4.8 | 0.183 |
| Tatoeba-test.tso-eng.tso.eng | 80.0 | 0.878 |
| Tatoeba-test.umb-eng.umb.eng | 5.1 | 0.220 |
| Tatoeba-test.wol-eng.wol.eng | 13.6 | 0.216 |
| Tatoeba-test.xho-eng.xho.eng | 34.8 | 0.521 |
| Tatoeba-test.yor-eng.yor.eng | 22.9 | 0.393 |
| Tatoeba-test.zul-eng.zul.eng | 41.3 | 0.576 |

opus2m-2020-08-12.zip

  • dataset: opus2m
  • model: transformer
  • source language(s): bam_Latn ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
  • target language(s): eng
  • pre-processing: normalization + SentencePiece (spm32k,spm32k)
  • download: opus2m-2020-08-12.zip
  • test set translations: opus2m-2020-08-12.test.txt
  • test set scores: opus2m-2020-08-12.eval.txt

Benchmarks

| testset | BLEU | chr-F |
|---------|------|-------|
| Tatoeba-test.bam-eng.bam.eng | 2.4 | 0.090 |
| Tatoeba-test.ewe-eng.ewe.eng | 10.3 | 0.384 |
| Tatoeba-test.ful-eng.ful.eng | 1.2 | 0.114 |
| Tatoeba-test.ibo-eng.ibo.eng | 7.5 | 0.197 |
| Tatoeba-test.kin-eng.kin.eng | 30.7 | 0.481 |
| Tatoeba-test.lin-eng.lin.eng | 3.1 | 0.185 |
| Tatoeba-test.lug-eng.lug.eng | 3.1 | 0.261 |
| Tatoeba-test.multi.eng | 21.3 | 0.377 |
| Tatoeba-test.nya-eng.nya.eng | 31.6 | 0.502 |
| Tatoeba-test.run-eng.run.eng | 24.9 | 0.420 |
| Tatoeba-test.sag-eng.sag.eng | 5.2 | 0.231 |
| Tatoeba-test.sna-eng.sna.eng | 20.1 | 0.374 |
| Tatoeba-test.swa-eng.swa.eng | 4.6 | 0.191 |
| Tatoeba-test.toi-eng.toi.eng | 4.8 | 0.122 |
| Tatoeba-test.tso-eng.tso.eng | 100.0 | 1.000 |
| Tatoeba-test.umb-eng.umb.eng | 9.0 | 0.246 |
| Tatoeba-test.wol-eng.wol.eng | 14.0 | 0.212 |
| Tatoeba-test.xho-eng.xho.eng | 38.2 | 0.558 |
| Tatoeba-test.yor-eng.yor.eng | 21.2 | 0.364 |
| Tatoeba-test.zul-eng.zul.eng | 42.3 | 0.589 |
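
When choosing among the releases above, the aggregate `Tatoeba-test.multi.eng` BLEU is one convenient summary; a tiny sketch that compares releases on that score, with values copied from the benchmark tables:

```python
# Aggregate multi-language BLEU (Tatoeba-test.multi.eng) per release,
# copied from the benchmark tables above.
multi_bleu = {
    "opus-2020-07-04": 24.2,
    "opus-2020-07-14": 21.8,
    "opus-2020-07-20": 21.1,
    "opus-2020-07-27": 20.3,
    "opus2m-2020-08-12": 21.3,
}

best = max(multi_bleu, key=multi_bleu.get)
print(best, multi_bleu[best])
```

Note that per-pair scores move in different directions across releases (e.g. kin-eng rises from 23.1 to 34.3 between the first two releases while the aggregate falls), so the best release depends on the target language; the extreme tso-eng scores (up to 100.0 BLEU) likely reflect a very small test set and should not be over-interpreted.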