This is part of the effort to extend quanda to the language modality. Ideally, we want our explainers to support HuggingFace AutoModelForSequenceClassification model instances (see also #8). I don't think any of our explainer wrapper classes currently support them.
Steps for this issue:
Add a model, such as this, together with its train dataset, as a test suite -> Can we find a "mini" version of the model? Can we add only a subset of the train set? In the end, the tests should run as quickly as possible and use as little memory as possible.
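A rough fixture sketch for what such a test suite could look like. The checkpoint (`prajjwal1/bert-tiny`), the dataset (SST-2), the subset size, and the fixture names are placeholders, not a committed choice:

```python
# Hypothetical test fixtures: a tiny sequence-classification checkpoint plus a small
# slice of a train set, so the tests stay fast and memory-light.
import pytest
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

TINY_CHECKPOINT = "prajjwal1/bert-tiny"  # placeholder: any very small encoder works


@pytest.fixture(scope="session")
def tiny_seq_cls_model():
    # num_labels=2 to match a binary sentiment-style task; the head is freshly initialized.
    return AutoModelForSequenceClassification.from_pretrained(TINY_CHECKPOINT, num_labels=2)


@pytest.fixture(scope="session")
def tiny_train_subset():
    tokenizer = AutoTokenizer.from_pretrained(TINY_CHECKPOINT)
    # Only the first 64 training examples, tokenized to a short fixed length.
    dataset = load_dataset("glue", "sst2", split="train[:64]")

    def tokenize(batch):
        return tokenizer(batch["sentence"], padding="max_length", truncation=True, max_length=32)

    dataset = dataset.map(tokenize, batched=True)
    dataset.set_format("torch", columns=["input_ids", "attention_mask", "label"])
    return dataset
```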
In the TRAK example, they create a wrapper for an AutoModelForSequenceClassification. Can we incorporate this wrapper into our code base, so that a TRAK explainer can be initialized with an AutoModelForSequenceClassification model? Please figure out the best solution here.
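A minimal sketch of what such a wrapper could look like, in the spirit of the TRAK example: expose the HuggingFace model as a plain `nn.Module` whose `forward` returns the logits tensor rather than the HuggingFace output object, which is what gradient-based explainers generally expect. The class name and forward signature here are illustrative, not taken from the TRAK code:

```python
import torch
from transformers import PreTrainedModel


class SequenceClassificationModel(torch.nn.Module):
    """Wrap a HuggingFace sequence-classification model to return raw logits."""

    def __init__(self, hf_model: PreTrainedModel):
        super().__init__()
        self.model = hf_model

    def forward(self, input_ids, attention_mask=None, token_type_ids=None):
        outputs = self.model(
            input_ids=input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
        )
        # Return the logits tensor instead of the SequenceClassifierOutput object.
        return outputs.logits
```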
Build TRAK wrapper tests for the newly added language test suite.
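A first smoke test building on the fixtures and wrapper sketched above; it only checks the output shape. The actual TRAK explainer test would go through quanda's TRAK wrapper class, whose constructor arguments are not assumed here:

```python
def test_wrapper_returns_logits(tiny_seq_cls_model, tiny_train_subset):
    wrapped = SequenceClassificationModel(tiny_seq_cls_model)
    batch = tiny_train_subset[:2]
    logits = wrapped(batch["input_ids"], attention_mask=batch["attention_mask"])
    assert logits.shape == (2, 2)  # (batch_size, num_labels)
```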
Adjust the base Explainer accordingly.
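One possible direction for the base Explainer, sketched with a simplified constructor that is not quanda's actual API: detect a HuggingFace model at construction time and wrap it transparently, so explainers keep working with a plain `nn.Module` internally.

```python
import torch
from transformers import PreTrainedModel


class Explainer:
    def __init__(self, model: torch.nn.Module, train_dataset, **kwargs):
        if isinstance(model, PreTrainedModel):
            # SequenceClassificationModel is the hypothetical wrapper sketched above;
            # it makes forward() return a plain logits tensor.
            model = SequenceClassificationModel(model)
        self.model = model
        self.train_dataset = train_dataset
```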