Numbers getting changed after translation #24
You can use our inference pipeline, which should handle these cases; you can follow the steps described here. We tried the same example on our demo and it worked fine: the numbers were preserved.
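For anyone landing here, a minimal sketch of what invoking the pipeline looks like; the `Model` class and `batch_translate` call follow the repo's `inference/engine.py`, but treat the exact names, arguments, and paths as assumptions that may differ across versions.

```python
# Minimal sketch of running the inference pipeline (names assumed;
# check inference/engine.py in your checkout for the exact API).
from inference.engine import Model

model = Model("path/to/ckpt_dir", model_type="fairseq")  # hypothetical path

# The pipeline wraps entities (URLs, emails, numbers, ...) in placeholders
# before translation and restores them afterwards.
sentences = ["The package costs $231 and weighs 3.5 kg."]
translations = model.batch_translate(sentences, "eng_Latn", "hin_Deva")
print(translations[0])
```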
I just tried the following sentence on the demo page:
It translated to Hindi as:
Is it handled for floating-point cases as well?
I'm facing the same problem. The model is hallucinating numbers. Any updates on how to fix that?
A general comment about the numeral issue: in some cases we do observe that the placeholder-based approach in the inference engine can give suboptimal results for inputs involving numerals, because the model hallucinates the placeholder identifier as an actual number instead of retaining the placeholder, which is what happens in the example in this comment. You can consider removing the numeral pattern and letting the model handle numerals on its own, to avoid these placeholder-induced hallucinations.
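To make the suggestion concrete, here is a minimal sketch of a placeholder-based wrap/restore pass with the numeral pattern dropped; the pattern names, the `<ID#>` format, and the helper functions are illustrative, not the engine's actual code.

```python
import re

# Illustrative entity patterns; the real inference engine keeps a similar
# list. Leaving NUMERAL_PATTERN out of ACTIVE_PATTERNS lets the model
# handle numbers itself, avoiding hallucinated placeholder IDs
# (e.g. the model emitting "1" where "<ID1>" should have been retained).
URL_PATTERN = re.compile(r"https?://\S+")
EMAIL_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
NUMERAL_PATTERN = re.compile(r"\b\d+(?:\.\d+)?\b")

ACTIVE_PATTERNS = [URL_PATTERN, EMAIL_PATTERN]  # NUMERAL_PATTERN dropped

def wrap_with_placeholders(text, patterns):
    """Swap each match for an <ID#> token and remember the mapping."""
    mapping, idx = {}, 1
    for pattern in patterns:
        for match in dict.fromkeys(pattern.findall(text)):  # dedupe, keep order
            token = f"<ID{idx}>"
            mapping[token] = match
            text = text.replace(match, token)
            idx += 1
    return text, mapping

def restore_placeholders(text, mapping):
    """Paste the original spans back over their <ID#> tokens."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

src = "Mail admin@example.com or visit https://example.com to pay the $231 fee."
wrapped, mapping = wrap_with_placeholders(src, ACTIVE_PATTERNS)
# wrapped == "Mail <ID2> or visit <ID1> to pay the $231 fee." (231 untouched)
# After translating `wrapped`, run restore_placeholders(translated, mapping).
```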
I see the following difference between the training code and the inference code: during training / finetuning, the placeholder being used is a `dnt` tag, but during inference a different tag is used altogether, an `ID` placeholder. Why is this the case? Doesn't this mean that the model isn't explicitly primed to retain the `ID` tags? Shouldn't we be using `dnt` tags at inference time as well? Please correct me if I am wrong somewhere. Thanks!
Yes, we used the dnt-based approach during training; however, we apply a final stage of fine-tuning on BPCC-seed data, which does not contain much representation of such cases. The model therefore partly loses its ability to work with the tags because of the lack of DNT cases in the BPCC-seed data. In the broader scheme of things, we chose improved translation quality over preserving this ability. Since the dnt approach doesn't work well with the final models, we switched to the placeholder-based approach, which we observe to be very effective in most cases: apart from numbers, we don't observe hallucinations in any other case. Doing away with the numeral pattern might be a fix, but this needs to be extensively tested.
Yes, you are correct. We don't explicitly use these ID tags during training; this was based on the empirical observation that the model preserves ID tags in most cases, though this cannot be 100% guaranteed.
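To make the difference concrete, the two schemes look roughly like this on the same sentence; the tag spellings here are illustrative, so check the training and inference code for the exact tokens.

```python
# Training-time style: the span to preserve stays in place, wrapped in dnt tags.
dnt_style = "The report is at <dnt> https://example.com </dnt> now."

# Inference-time style: the span is swapped for a bare ID placeholder and
# pasted back after translation via a saved mapping.
id_style = "The report is at <ID1> now."
id_mapping = {"<ID1>": "https://example.com"}
```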
Cool, thanks! Will finetune with IDs instead of `dnt` tags. Also, just FYI, like you said above, the model does not seem to be working well with `dnt` tags; it simply ignores them in the input.
I've deployed the model, and during inference 231 -> 239. The issue seems to be that the number changes only when $ is given; otherwise it seems to be okay. What's the reason for this, and is there a possible solution?
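Until there is an official fix, one pragmatic guard is to check that the numerals in the output match those in the input and flag (or re-handle) mismatches. A minimal sketch, assuming ASCII digits on both sides and using a hypothetical faulty output for illustration:

```python
import re

NUM = re.compile(r"\d+(?:\.\d+)?")

def numerals_preserved(src, hyp):
    """True if source and hypothesis contain the same multiset of numbers.

    Assumes ASCII digits on both sides; add a digit-transliteration step
    first if the target script uses its own numerals (e.g. Devanagari).
    """
    return sorted(NUM.findall(src)) == sorted(NUM.findall(hyp))

src = "The subscription costs $231 per year."
hyp = "सदस्यता की लागत $239 प्रति वर्ष है।"  # hypothetical faulty model output
if not numerals_preserved(src, hyp):
    print("numeral mismatch: consider re-decoding or copying the numbers over")
```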