Add a method of checking CJK #13

Open

ftyers opened this issue Sep 10, 2021 · 3 comments
@ftyers (Owner) commented Sep 10, 2021

Perhaps something like PASS to basically return whatever was input and REPL for removing punctuation.

Another option would be something like CB for check Unicode Block.
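A minimal sketch of what PASS and REPL might look like in Python (the function names and the category-based punctuation test are my own illustration, not an existing API in this repo):

```python
import unicodedata

def pass_through(text):
    """PASS: return whatever was input, unchanged."""
    return text

def repl_punct(text):
    """REPL: drop punctuation, including fullwidth CJK marks
    such as 。，？！ (all have a Unicode category starting with P)."""
    return "".join(
        ch for ch in text
        if not unicodedata.category(ch).startswith("P")
    )

print(repl_punct("你好，世界！"))  # -> 你好世界
```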

@wenjie-p (Contributor) commented:

Hi Fran,

I just noticed this issue, but I think I can help with this for Chinese.

Perhaps something like PASS to basically return whatever was input and REPL for removing punctuation.

If I understand correctly, this is used to separate the valid characters from punctuation. Generally, punctuation should not be considered for AM training in Chinese. But some punctuation marks like ?! usually signal strong emotion, which distinguishes those sentences from declarative ones, so I think punctuation removal should be done carefully and should take the transcripts into account.

Another option would be something like CB for check Unicode Block.

I think we can separate the valid characters from punctuation for Chinese based on the hexadecimal Unicode code points.
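For example, a CB-style check could test each code point against the relevant block ranges. A sketch, where everything about the function names is illustrative and the ranges cover only the basic block plus Extension A:

```python
# Sketch: keep only characters whose code points fall in the
# CJK Unified Ideographs blocks; later extensions are omitted here.
CJK_RANGES = [
    (0x4E00, 0x9FFF),   # CJK Unified Ideographs
    (0x3400, 0x4DBF),   # CJK Unified Ideographs Extension A
]

def is_cjk(ch):
    cp = ord(ch)
    return any(lo <= cp <= hi for lo, hi in CJK_RANGES)

def valid_chars(text):
    return "".join(ch for ch in text if is_cjk(ch))
```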

@ftyers (Owner, Author) commented Sep 13, 2021

For punctuation it would be interesting to see the effect of adding it in or not. For many acoustic models, I worry that the kind of information needed for predicting the final punctuation might be quite a long way from where it needs to be predicted, e.g. maybe the intonation difference is clear in the middle of the utterance, but the question mark needs to be predicted at the end.

On the other hand, I think that this is an empirical question and could be determined by training a model with and without punctuation and looking at the errors.

I think that the "check block" is a nicer example; it would allow us to exclude transcripts which include Latin characters, for example. Also, for Chinese, are you mostly training byte-based models, or pinyin/phone-based?
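A sketch of that kind of transcript-level filter; checking the character's Unicode name is just one cheap heuristic for detecting Latin-script characters, and the names here are hypothetical:

```python
import unicodedata

def contains_latin(text):
    """Reject a transcript if any character is Latin-script,
    detected via the character's Unicode name."""
    for ch in text:
        try:
            if unicodedata.name(ch).startswith("LATIN"):
                return True
        except ValueError:  # unnamed characters, e.g. some controls
            pass
    return False

transcripts = ["你好世界", "你好 world"]
kept = [t for t in transcripts if not contains_latin(t)]  # ["你好世界"]
```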

@wenjie-p (Contributor) commented Sep 14, 2021

for Chinese, are you mostly training byte-based models, or pinyin/phone-based?

I think it depends. For a hybrid system, a pronunciation lexicon is usually required to map each character to pinyin; an E2E system is lexicon-free, and we can adopt BPE as the modeling unit. To be honest, my current research is not focused on Chinese ASR, but I think people choose their modeling unit based on their needs, i.e. the model/algorithm they want to improve.
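For the lexicon-free BPE option, a minimal sketch using SentencePiece; the input file, model prefix, and vocabulary size below are placeholders:

```python
import sentencepiece as spm

# Train a BPE model directly on raw Chinese transcripts (one per line).
# "transcripts.txt", "zh_bpe", and vocab_size=4000 are placeholder values.
spm.SentencePieceTrainer.train(
    input="transcripts.txt",
    model_prefix="zh_bpe",
    vocab_size=4000,
    model_type="bpe",
    character_coverage=0.9995,  # common setting for CJK-heavy corpora
)

sp = spm.SentencePieceProcessor(model_file="zh_bpe.model")
print(sp.encode("今天天气很好", out_type=str))
```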
