2023 - IEEE Access - A Smaller and Better Word Embedding for Neural Machine Translation
Link: IEEE Access
Main problem
Traditional word embedding methods don't account for relations between word embeddings, which leaves room for inaccuracy in translation results.
Proposed method
The authors aim to fix that problem by proposing a method built on two key components: relation embedding and shared embedding. They claim these components are key to improving results, especially on low-resource tasks.
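The paper's exact formulation isn't reproduced in this summary, so below is only a minimal sketch of how a relation embedding might be combined with a shared embedding table. The class name, the additive composition, and the `relation_ids` input are my assumptions for illustration, not the paper's API.

```python
import torch
import torch.nn as nn

class RelationAwareEmbedding(nn.Module):
    """Hypothetical sketch, NOT the paper's exact formulation:
    a shared embedding table plus a relation embedding that
    encodes associations between words."""

    def __init__(self, vocab_size: int, dim: int, num_relations: int):
        super().__init__()
        # Shared embedding: one table that could be reused across
        # encoder, decoder, and output projection (a common way to
        # shrink parameter count).
        self.shared = nn.Embedding(vocab_size, dim)
        # Relation embedding: a small table of relation vectors,
        # e.g. one per word-association cluster (assumed design).
        self.relation = nn.Embedding(num_relations, dim)

    def forward(self, token_ids: torch.Tensor, relation_ids: torch.Tensor) -> torch.Tensor:
        # Additive composition is an assumption; the paper may
        # combine the two components differently.
        return self.shared(token_ids) + self.relation(relation_ids)

# Example: a batch of 2 sentences, 5 tokens each.
emb = RelationAwareEmbedding(vocab_size=32_000, dim=512, num_relations=100)
tokens = torch.randint(0, 32_000, (2, 5))
relations = torch.randint(0, 100, (2, 5))
print(emb(tokens, relations).shape)  # torch.Size([2, 5, 512])
```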
My Summary
The researchers propose their own word embedding method for use with Neural Machine Translation (NMT) systems. A key difference from other methods is that theirs retains the "knowledge of the association between words" in the training process, which improves BLEU performance on several datasets, including WMT'14 English->German and the low-resource Global Voices v2018q4 Spanish->Czech task (roughly 15k sentence pairs). As a bonus, the proposed method also yields a model with up to 15% fewer parameters than the baselines. The researchers claim the method works across various NMT systems; however, it has yet to be tested on other NLP tasks such as dialogue generation and question answering, which is left for future work.
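As a rough back-of-the-envelope illustration of why a shared embedding table shrinks a model (assuming the savings come at least partly from reusing one table across encoder, decoder, and output projection; the paper may account for its ~15% reduction differently, and these sizes are hypothetical):

```python
# Hypothetical sizes; not taken from the paper.
vocab_size, dim = 32_000, 512

# Encoder, decoder, and output projection each owning a table:
separate = 3 * vocab_size * dim
# One shared table reused by all three:
shared = 1 * vocab_size * dim

print(f"separate tables: {separate:,} params")  # 49,152,000
print(f"shared table:    {shared:,} params")    # 16,384,000
```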
Datasets
WMT14 English-German
Global Voices v2018q4 Spanish-Czech
WMT14 English-French
Russian-Spanish