
Add a documentation page for data quality required for fine-tuning #598

Open
5 tasks done
Aml-Hassan-Abd-El-hamid opened this issue Oct 6, 2024 · 8 comments
Labels
enhancement New feature or request stale

Comments

@Aml-Hassan-Abd-El-hamid

Self Checks

  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [FOR CHINESE USERS] Please be sure to submit issues in English, or they will be closed. Thank you! :)
  • Please do not modify this template :) and fill in all the required fields.

1. Is this request related to a challenge you're experiencing? Tell me about your story.

I'm trying to fine-tune the model to be able to pronounce Egyptian dialect.

I currently have a number of long videos (6 to 8 hours each) containing Egyptian books, with audio of different people reading those books. I'm cutting the audio into segments on silence and matching the segments to the text from the books, but I'm lacking some information to do so, such as:

  1. How long should the ideal audio/text segments be to get the best results?
  2. Should I keep the audio in stereo or convert it to mono?
  3. Should I resample the audio or keep its original sample rate?
  4. Should I delete the audio segments with slight background music, or should I keep them?
  5. Should I keep the punctuation in the text or delete it?
  6. Is there any cleaning of the text or the audio that should be done before fine-tuning?
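For reference, the text-side cleanup asked about in points 5 and 6 might look like the sketch below. This is a plain-Python illustration, not official project guidance; the choice of which characters to strip and which bracketed annotations count as noise are assumptions:

```python
import re
import unicodedata

def clean_transcript(text: str, keep_punctuation: bool = True) -> str:
    """Normalize a transcript line before pairing it with an audio segment.

    Illustrative sketch only: which characters to strip is an assumption,
    not a documented requirement of the project.
    """
    # Normalize Unicode so visually identical characters compare equal.
    text = unicodedata.normalize("NFKC", text)
    # Drop annotation noise such as [music] or (laughs).
    text = re.sub(r"[\[\(][^\]\)]*[\]\)]", " ", text)
    if not keep_punctuation:
        # Keep only letters, digits, and whitespace
        # (str.isalnum() covers Arabic script as well).
        text = "".join(ch for ch in text if ch.isalnum() or ch.isspace())
    # Collapse runs of whitespace left over from the removals.
    return re.sub(r"\s+", " ", text).strip()
```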

2. Additional context or comments

No response

3. Can you help us with this feature?

  • I am interested in contributing to this feature.
@Aml-Hassan-Abd-El-hamid Aml-Hassan-Abd-El-hamid added the enhancement New feature or request label Oct 6, 2024
@PoTaTo-Mika
Collaborator

In a later version, we plan to remove the fine-tuning part. Instead, we'll add a series of tools to enhance the quality of your reference audio.

@Aml-Hassan-Abd-El-hamid
Author

But what if I need to add a new language or a dialect that the model usually doesn't handle? We need to fine-tune the model to accomplish such a task, right?

@PoTaTo-Mika
Collaborator

But what if I need to add a new language or a dialect that the model usually doesn't handle? We need to fine-tune the model to accomplish such a task, right?

True. If you want to fine-tune for a new language (though the next version will support most spoken languages in the world), you may need about 2K hours of low-quality data and about 100 hours (the more, the better) of high-quality data (44.1 kHz with high-accuracy labels).
Hope this helps you with running the project.

@Aml-Hassan-Abd-El-hamid
Author

Thanks a lot for your response, that's really helpful. I have one last question: does the data need to be cut to a certain length?
I have multiple long audios (around 7 to 8 hours each). Should I cut them into shorter segments? And if so, what is the recommended segment length? 15 minutes? 5 minutes? 30 seconds?

@PoTaTo-Mika
Collaborator

Thanks a lot for your response, that's really helpful. I have one last question: does the data need to be cut to a certain length? I have multiple long audios (around 7 to 8 hours each). Should I cut them into shorter segments? And if so, what is the recommended segment length? 15 minutes? 5 minutes? 30 seconds?

Yes, we recommend cutting them into segments of about 30 seconds each.
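As an illustration of the 30-second recommendation, one way to plan segment boundaries from silence-split speech intervals is sketched below. This is pure Python; the function name, the merge strategy, and the fixed-chunk fallback for overlong intervals are assumptions, not part of the project:

```python
def plan_segments(speech_intervals, max_len=30.0):
    """Merge consecutive speech intervals into training segments of at most
    max_len seconds.

    speech_intervals is a list of (start, end) times in seconds, as produced
    by a silence-based splitter. Illustrative sketch only: the thresholds
    and splitting strategy are assumptions, not project defaults.
    """
    segments = []
    cur_start, cur_end = None, None
    for start, end in speech_intervals:
        if cur_start is None:
            cur_start, cur_end = start, end
        elif end - cur_start <= max_len:
            cur_end = end  # extend the current segment across the pause
        else:
            segments.append((cur_start, cur_end))
            cur_start, cur_end = start, end
        # A single interval longer than max_len is split into fixed chunks.
        while cur_end - cur_start > max_len:
            segments.append((cur_start, cur_start + max_len))
            cur_start += max_len
    if cur_start is not None and cur_end > cur_start:
        segments.append((cur_start, cur_end))
    return segments
```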

@Aml-Hassan-Abd-El-hamid
Author

Thank you very much for your helpful and fast responses.

@GalenMarek14

But what if I need to add a new language or a dialect that the model usually doesn't handle? We need to fine-tune the model to accomplish such a task, right?

True. If you want to fine-tune for a new language (though the next version will support most spoken languages in the world), you may need about 2K hours of low-quality data and about 100 hours (the more, the better) of high-quality data (44.1 kHz with high-accuracy labels). Hope this helps you with running the project.

Thank you for your hard work on this project. I was wondering if it's possible to provide a rough estimate of when the next model might be available? Even a ballpark figure would be greatly appreciated.


This issue is stale because it has been open for 30 days with no activity.

@github-actions github-actions bot added the stale label Nov 25, 2024