How to train our dataset #89
@helsinki @veer66 @jorgtied @TommiNieminen @Traubert please provide support. I have a large dataset that I want to train with OPUS-MT. I completed all the installation steps and ran server.py with a pretrained model. Now I need to train on my own dataset; please advise on how to do that.
According to the readme this could be a tricky path: "... but documentation needs to be improved. Also, the targets require a specific environment and right now only work well on the CSC HPC cluster in Finland." How about first using MarianMT?
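If the OPUS-MT-train targets are too tied to the CSC cluster, a plain Marian training run can be launched directly. A minimal sketch, assuming tokenized parallel files and a shared SentencePiece vocabulary (all file names below are placeholders, not from this repo):

```shell
# corpus.src / corpus.trg : tokenized parallel training data (source/target)
# dev.src / dev.trg       : held-out validation data
# vocab.spm               : shared SentencePiece vocabulary
marian \
  --type transformer \
  --train-sets corpus.src corpus.trg \
  --vocabs vocab.spm vocab.spm \
  --model model.npz \
  --valid-sets dev.src dev.trg \
  --valid-freq 10000 --save-freq 10000 \
  --mini-batch-fit --workspace 9000
```

Hyperparameters such as `--workspace` (GPU memory in MB) need to be adapted to your hardware; the Marian documentation linked above lists the full set of training options.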
Hi,
Thanks for the response.
I have trained models for Indian and several other Asian languages using MarianMT and can translate text, PDF files, and Docx files as well as plain text files.
Thanks for providing such an amazing framework; great work.
Regards,
Akshay Bonde
9860737353
Software Engineer, Pune, India
Hi, I have my own dataset. Can someone please provide the procedure for training on our own dataset?