
BERT Tests #223

Open
dilyabareeva opened this issue Oct 23, 2024 · 0 comments
This is part of an effort to extend quanda to the language modality. Ideally, we want our explainers to support HuggingFace AutoModelForSequenceClassification model instances (see also #8). It is unclear whether any of our explainer wrapper classes currently support them.
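One concrete obstacle is the calling convention: HF sequence-classification models take keyword inputs (`input_ids`, `attention_mask`, ...) and return a ModelOutput object with a `.logits` field, while explainers written for vision models typically expect `model(x) -> Tensor`. A minimal adapter sketch (class names are ours, not quanda's; the dummy model stands in for a real checkpoint to avoid a download):

```python
import torch
import torch.nn as nn
from types import SimpleNamespace


class SequenceClassifierWrapper(nn.Module):
    """Hypothetical adapter: unpack a dict batch into keyword arguments and
    return a plain logits tensor, so explainers written for
    `model(x) -> Tensor` can handle HF-style models."""

    def __init__(self, model: nn.Module):
        super().__init__()
        self.model = model

    def forward(self, batch: dict) -> torch.Tensor:
        out = self.model(**batch)
        # HF sequence-classification models return a ModelOutput with .logits
        return out.logits if hasattr(out, "logits") else out


class _DummySeqClsModel(nn.Module):
    """Stand-in for AutoModelForSequenceClassification (avoids a download):
    embeds token ids, mean-pools, and classifies into 2 labels."""

    def __init__(self, vocab_size: int = 100, hidden: int = 8, num_labels: int = 2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden)
        self.head = nn.Linear(hidden, num_labels)

    def forward(self, input_ids=None, attention_mask=None):
        pooled = self.emb(input_ids).mean(dim=1)  # (batch, hidden)
        return SimpleNamespace(logits=self.head(pooled))


wrapped = SequenceClassifierWrapper(_DummySeqClsModel())
batch = {
    "input_ids": torch.randint(0, 100, (4, 12)),
    "attention_mask": torch.ones(4, 12, dtype=torch.long),
}
logits = wrapped(batch)
print(tuple(logits.shape))  # (4, 2)
```

With a real checkpoint, `_DummySeqClsModel()` would be replaced by `AutoModelForSequenceClassification.from_pretrained(...)`; whether this adapter lives inside each explainer or at the base Explainer level is exactly the design question this issue raises.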

Steps for this issue:

  • Add a model, such as this one, together with its train dataset, as a test suite. Can we find a "mini" version of the model? Can we add only a subset of the train set? The tests should ultimately run as quickly as possible and use as little memory as possible.
  • If needed, adjust the TRAK wrapper class so it can process this model and dataset. See https://github.com/MadryLab/trak/blob/main/examples/qnli.py.
  • In the TRAK example, they create a wrapper for an AutoModelForSequenceClassification. Can we incorporate this wrapper into our code base, so that a TRAK explainer can be initialized with an AutoModelForSequenceClassification model? Please figure out the best solution here.
  • Build TRAK wrapper tests for the newly added language test suite.
  • Adjust the base Explainer accordingly.
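For the first step, a fast, low-memory fixture can be built by pairing a small community checkpoint (e.g. a "mini" BERT variant, to be vetted) with a fixed random subset of the train set via `torch.utils.data.Subset`. A sketch with synthetic data standing in for the tokenized dataset (all sizes are illustrative assumptions):

```python
import torch
from torch.utils.data import Subset, TensorDataset

# Fixed seed -> deterministic, reusable test fixture.
torch.manual_seed(0)

# Stand-in for a tokenized train set; real shapes depend on the chosen
# checkpoint's tokenizer (vocab size, max sequence length).
full_train = TensorDataset(
    torch.randint(0, 30522, (1000, 16)),  # token ids
    torch.randint(0, 2, (1000,)),         # binary labels
)

# Keep only a small, fixed subset so the suite stays fast and light.
subset_idx = torch.randperm(len(full_train))[:32].tolist()
mini_train = Subset(full_train, subset_idx)
print(len(mini_train))  # 32
```

Checking the subset into the test suite as a list of indices (rather than re-sampling at test time) keeps the fixture reproducible across runs and machines.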