Question: Random semantic embedding in SemanticTransformer? #249

Open
stg1205 opened this issue Nov 13, 2023 · 1 comment

Comments

@stg1205

stg1205 commented Nov 13, 2023

Once we already have the semantic token ids from HubertKmeans, the semantic embeddings are computed with a randomly initialized embedding layer in SemanticTransformer. So why not use the cluster centroids of the pre-trained HuBERT as the embedding?
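To illustrate what I mean, here is a rough PyTorch sketch of the current setup (not the actual library code; the sizes and variable names are hypothetical):

```python
import torch
import torch.nn as nn

num_clusters = 500   # hypothetical k-means codebook size
dim = 1024           # hypothetical transformer model dimension

# HubertKmeans quantizes HuBERT features into cluster ids (the semantic token ids)
semantic_token_ids = torch.randint(0, num_clusters, (1, 128))  # dummy batch of ids

# inside the transformer those ids are looked up in an embedding table
# whose weights start out random and are learned during training
semantic_embedding = nn.Embedding(num_clusters, dim)
tokens = semantic_embedding(semantic_token_ids)  # shape: (1, 128, dim)
```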

@biendltb (Contributor)

The idea of the attention mechanism in the transformer network is to capture the relationships between token ids. The semantic embeddings are randomly initialized, but they are trained jointly with the transformer and learn to capture the relationships between tokens during the training process.
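If you did want to experiment with the suggestion, a minimal sketch of seeding the embedding table from the k-means centroids could look like the following. This assumes a hypothetical `centroids` tensor of shape `(num_clusters, feature_dim)` plus a projection to the transformer dimension; it is not what the library does by default, and the table would still be fine-tuned during training:

```python
import torch
import torch.nn as nn

num_clusters, feature_dim, dim = 500, 768, 1024  # hypothetical sizes

# hypothetical k-means centroids taken from the pre-trained HuBERT feature space
centroids = torch.randn(num_clusters, feature_dim)

# project the centroids into the transformer dimension and copy them in
# as the initial embedding weights; gradients can still update them later
proj = nn.Linear(feature_dim, dim, bias=False)
semantic_embedding = nn.Embedding(num_clusters, dim)
with torch.no_grad():
    semantic_embedding.weight.copy_(proj(centroids))
```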
