
Can I use a pytorch-like training process instead of the scikit-compatible way of using this lib? #551

Open
kaiwang0112006 opened this issue Aug 22, 2024 · 3 comments

@kaiwang0112006

I want to do research on tabnet with federated learning, which means I need to get the model weights out and set them back during each epoch of training. That would be easier with a pytorch-like per-epoch training process instead of the scikit-compatible way of training this lib provides.

@kaiwang0112006 kaiwang0112006 added the enhancement New feature or request label Aug 22, 2024
@Optimox
Collaborator

Optimox commented Aug 22, 2024

Hello,
I am not sure that I understand your request, but if you want to use the tabnet network simply as a pytorch module and insert it inside your own pipeline, you can simply use the modules from here:

class TabNet(torch.nn.Module):
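
For context, a minimal sketch of what such a custom loop could look like, assuming `TabNet` from `pytorch_tabnet.tab_network` can be instantiated with just `input_dim` and `output_dim` (other arguments, including `group_attention_matrix`, left at their defaults) and that its forward pass returns the prediction together with a sparsity term; check the signature against your installed version:

```python
import torch
from pytorch_tabnet.tab_network import TabNet

# Sketch only: argument names and the (output, M_loss) return value follow the
# linked tab_network.py; verify them against the release you have installed.
model = TabNet(input_dim=16, output_dim=2)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-2)
loss_fn = torch.nn.CrossEntropyLoss()

X = torch.randn(256, 16)           # toy batch of tabular features
y = torch.randint(0, 2, (256,))    # toy binary targets

for epoch in range(10):
    # Federated-learning style access: read the weights out ...
    state = {k: v.clone() for k, v in model.state_dict().items()}
    # ... aggregate them with other clients here, then set them back.
    model.load_state_dict(state)

    optimizer.zero_grad()
    out, m_loss = model(X)          # forward also returns a sparsity regularization term
    loss = loss_fn(out, y) - 1e-3 * m_loss  # subtract it with a small coefficient, as the library does
    loss.backward()
    optimizer.step()
```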

@kaiwang0112006
Author

That's great! What should I pass for the parameter "group_attention_matrix"?

@Optimox
Collaborator

Optimox commented Aug 23, 2024

This is an advanced feature; you can leave it as None. Otherwise you'll need to dig a bit into the code to use it. It's just a matrix of weights describing how the attention can work across different features.
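
As a hypothetical illustration of that last point (not from the thread): assuming rows of the matrix index attention groups and columns index input features, an identity matrix would mean every feature attends independently, which should behave like the default:

```python
import torch

# Hypothetical sketch, assuming the matrix has shape (n_groups, input_dim),
# rows index attention groups and columns index input features.
input_dim = 16

# Identity: each feature is its own group, i.e. no grouping.
group_attention_matrix = torch.eye(input_dim)

# To tie features 0 and 1 into a single group instead, one row shares the
# weight between them and the remaining features keep their own rows:
tied = torch.zeros(input_dim - 1, input_dim)
tied[0, 0] = tied[0, 1] = 0.5
for i in range(2, input_dim):
    tied[i - 1, i] = 1.0
```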
