Research : Embedding Aware Attention #122

Closed
Optimox opened this issue Jun 4, 2020 · 10 comments · Fixed by #443
Labels: enhancement (New feature or request), Research (Ideas to improve architecture)

Comments

@Optimox
Collaborator

Optimox commented Jun 4, 2020

Main Problem

When training with large embedding dimensions, the mask size goes up.

One problem I see is that sparsemax does not know which columns come from the same embedded feature. This makes the model's job harder, since it has to learn to:

  • create embeddings that make sense
  • mask embeddings without destroying them; since sparsemax is sparse, it is very unlikely that all the columns of a given embedding are kept, so you lose the power of the embedding

Proposed Solutions

It's an open problem, but one direction I see as promising is embedding-aware attention.

The idea would be to mask all dimensions of a given embedding the same way, for example by using the mean or the max of the initial mask values for that embedding.

I implemented a first version here: #92
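For illustration, here is a minimal sketch of the idea (not the code in #92): it assumes you already have the sparsemax output `mask` of shape `(batch, post_embed_dim)` and a `group_index` tensor mapping each post-embedding column to its original feature; both names are illustrative.

```python
import torch

def embedding_aware_mask(mask, group_index, reduce="mean"):
    """Share a single attention value across all columns of the same embedding.

    mask        : (batch, post_embed_dim) sparsemax output
    group_index : LongTensor of shape (post_embed_dim,) giving, for each
                  post-embedding column, the index of its original feature
                  (assumed to be on the same device as mask)
    reduce      : "mean" or "max" aggregation inside each embedding group
    """
    batch = mask.size(0)
    n_features = int(group_index.max()) + 1
    index = group_index.repeat(batch, 1)
    grouped = torch.zeros(batch, n_features, device=mask.device)

    if reduce == "mean":
        grouped.scatter_add_(1, index, mask)
        counts = torch.bincount(group_index, minlength=n_features).clamp(min=1)
        grouped = grouped / counts
    else:  # "max" (mask values are non-negative after sparsemax)
        grouped = grouped.scatter_reduce(1, index, mask, reduce="amax")

    # broadcast each feature's value back to every column of its embedding;
    # note the "mean" variant no longer sums to 1 per row, so one might renormalize
    return grouped.gather(1, index)

# example: two features embedded with dimensions 3 and 2
mask = torch.rand(4, 5)
group_index = torch.tensor([0, 0, 0, 1, 1])
shared = embedding_aware_mask(mask, group_index, reduce="mean")
```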

If you feel like this is interesting and would like to contribute, please share your ideas in comments or open a PR!

@Optimox Optimox added the enhancement New feature or request label Jun 4, 2020
@Optimox Optimox added the Research Research Ideas to improve architecture label Jun 4, 2020
@joseluismoreira

joseluismoreira commented Nov 22, 2020

Great issue! Reading the paper, the original implementation, and this one, I had the same question about whether the embedding columns coming from the same feature are used together. Have you benchmarked that already?

@joseluismoreira

joseluismoreira commented Nov 22, 2020

I guess if we take the mean or the max we will lose the sparsity property... I am not sure, but maybe we could torch.stack the features instead of torch.cat-ing them, and apply the sparsemax along this new dimension. I would like to test this idea, and maybe we can evaluate the approaches together.
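Something like this rough sketch (it assumes every feature is embedded with the same dimension and that the columns of one feature are contiguous; I use the sparsemax from the entmax package here just because it accepts a dim argument — names and shapes are illustrative):

```python
import torch
from entmax import sparsemax  # pip install entmax

def stacked_sparsemax_mask(logits, n_features, emb_dim):
    """Apply sparsemax across features instead of across individual columns.

    logits : (batch, n_features * emb_dim) attention logits, where the
             emb_dim columns of each feature are contiguous (torch.cat order)
    """
    batch = logits.size(0)
    # view the concatenated logits as if the features had been torch.stack-ed
    stacked = logits.view(batch, n_features, emb_dim)
    # sparsemax over the feature axis: selection happens between features,
    # not between single embedding columns
    mask = sparsemax(stacked, dim=1)
    return mask.reshape(batch, n_features * emb_dim)

# example: 3 features, each embedded with dimension 4
mask = stacked_sparsemax_mask(torch.randn(8, 12), n_features=3, emb_dim=4)
```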

@Optimox
Collaborator Author

Optimox commented Nov 22, 2020

Hello @joseluismoreira,

There are two PRs on this topic already:

I guess losing the sparsity is not very important, since you would only break it across the dimensions of a single embedded feature, so in the end you'll still be looking at only a few original features.

@joseluismoreira

Thank you, @Optimox. PR #217 seems to be what I was looking for. 👍

@W-void

W-void commented Oct 15, 2021

After sparsemax, only 1% of the features are nonzero. Is that normal?

@Optimox
Collaborator Author

Optimox commented Oct 15, 2021

hello @W-void,

The goal of the sparsemax activation is to get a sparse mask with a lot of zero values, so yes, this is expected.

However, if you think the masks are too sparse, you can play with a few parameters (see the example below):

  • switch from sparsemax to entmax (not sure this will have a big impact)
  • set lambda_sparse to 0 to reduce the penalization on sparsity
  • add more steps and set a larger value of gamma to be sure that the features used by each step are different (in the end the algorithm will use more features)
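For instance (the values here are only illustrative, not recommendations):

```python
import numpy as np
from pytorch_tabnet.tab_model import TabNetClassifier

# dummy data just to make the snippet self-contained
X_train = np.random.rand(256, 20).astype(np.float32)
y_train = np.random.randint(0, 2, 256)

clf = TabNetClassifier(
    n_steps=5,           # more steps -> more features used overall
    gamma=1.8,           # larger gamma pushes each step to pick different features
    lambda_sparse=0.0,   # removes the sparsity penalty from the loss
    mask_type="entmax",  # "sparsemax" (default) or "entmax"
)
clf.fit(X_train, y_train, max_epochs=10)
```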

Let me know if this helped.

@W-void

W-void commented Oct 15, 2021


@Optimox, thanks! I set lambda_sparse to 0 and it brought little improvement. I will switch from sparsemax to entmax as you suggested.
I'm still confused though: if only 2 or 3 features are used, the following FeatTransformer gets very little information. I don't think that's a good result.

@SeohuiPark

SeohuiPark commented Apr 29, 2022

I have a question about the masks and the "explain" method.

I saw in Tabmodel.py that the model has an "explain" method. I wonder how different masks can be produced for each instance of the test data, since the "explain" method does not train the model.

Do we apply the mask that was learned up to the last training epoch to each row of the test data?

@Optimox
Collaborator Author

Optimox commented Apr 29, 2022

The model learns to use its attention layer: for each row, it decides which columns should be masked or not.
During inference, the model decides on its own where to put its attention; nothing is hard coded, and every test row gets a different attention mask. You can use the explain method to see where the model has been looking, for every row at every step.
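For example, with a trained model `clf` and a test array `X_test` (both names are placeholders):

```python
# per-row, per-step attention after training
explain_matrix, masks = clf.explain(X_test)

# explain_matrix: aggregated feature importance for each test row
# masks: dict mapping each decision step to the mask used for every row
for step, step_mask in masks.items():
    print(f"step {step}, attention for the first test row: {step_mask[0]}")
```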

@Optimox
Collaborator Author

Optimox commented Dec 1, 2022

@athewsey this has been around for way too long. I have been thinking of a more general way to deal with this and I created a PR here: #443

I'd be happy to have your thoughts on this!
