I think we should settle on a model (or models) and a dataset before thinking about XAI.
You can use Captum on multiple types of models - if we want to have some XAI demo, we could include:

- BioBERT [1] for some NLP task; Captum with BERT on SQuAD [2] (word importance, layer interpretability, attention masks).
- Some kind of segmentation CNN; ResNet segmentation ablation [3] - this could be cool if we find a dataset in which we can segment more than one class (perturbation, feature ablation [4]).
- An image classification model; ResNet interpretability [5] - different gradient-based attributions (see the sketch after this list).
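For the image-classification option, a minimal gradient-based attribution with Captum might look like this - a hedged sketch assuming a torchvision ResNet-18 and a dummy, already-normalized input; the actual medical model and data would be dropped in later:

```python
# Hedged sketch: Integrated Gradients on a torchvision ResNet-18.
import torch
from torchvision.models import resnet18
from captum.attr import IntegratedGradients

model = resnet18(pretrained=True).eval()

# Placeholder input: one 224x224 RGB image, already normalized.
x = torch.randn(1, 3, 224, 224, requires_grad=True)

# Attribute the predicted class back to the input pixels.
pred_class = model(x).argmax(dim=1).item()
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(
    x,
    target=pred_class,
    baselines=torch.zeros_like(x),  # all-black image as the reference point
    return_convergence_delta=True,
)
print(attributions.shape, delta)  # per-pixel relevance scores for the prediction
```

Swapping IntegratedGradients for another gradient-based method (e.g. Saliency or GradientShap) mostly changes the constructor line, which is what would make comparing different attributions easy.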
VQA would be awesome - I think this would be a killer demo for ML on medical data + XAI [6] - but I don't know if there's any medical dataset made specifically for VQA; maybe look into it when solving Medical MVP: Choose a dataset for the demo #33?
The model requirements for every method can be found in [7]. Many of them only require the forward function to be differentiable, so we should be able to do some XAI on vision models.
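On that note, the perturbation-based methods (like the feature ablation mentioned above) don't even need gradients - they only call the model's forward function. A toy sketch, with a dummy forward function standing in for a real model:

```python
# Hedged sketch: FeatureAblation only needs a callable forward function,
# so it also works for non-differentiable pipelines.
import torch
from captum.attr import FeatureAblation

def forward_func(x):
    # Stand-in for any model's forward pass (logits for 2 classes).
    s = x.sum(dim=(1, 2, 3))
    return torch.stack([s, -s], dim=1)

x = torch.randn(1, 3, 8, 8)

# Group pixels into 2x2 patches so each patch is ablated as one feature.
mask = torch.arange(16).reshape(4, 4)
mask = mask.repeat_interleave(2, dim=0).repeat_interleave(2, dim=1)
mask = mask.unsqueeze(0).unsqueeze(0).expand(1, 3, 8, 8)

ablation = FeatureAblation(forward_func)
attr = ablation.attribute(x, target=0, feature_mask=mask, baselines=0)
print(attr.shape)  # same shape as x, constant within each ablated patch
```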
The web interface (Captum Insights) is nice; it can also be run in Jupyter notebooks, and I think we could also hack it and embed it in a web app.
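Roughly how that wiring could look - a hedged sketch assuming a torchvision ResNet and a dummy batch iterator; class names, transforms, and the dataset are placeholders, and the exact import path for ImageFeature can differ between Captum versions:

```python
# Hedged sketch: embedding a model into the Captum Insights visualizer,
# which renders inline in a notebook (.render()) or as a web page (.serve()).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18
from captum.insights import AttributionVisualizer, Batch
from captum.insights.attr_vis.features import ImageFeature  # path may vary by Captum version

model = resnet18(pretrained=True).eval()

def batch_iterator():
    # Stand-in for a real DataLoader over the medical dataset.
    yield Batch(inputs=torch.randn(4, 3, 224, 224),
                labels=torch.zeros(4, dtype=torch.long))

visualizer = AttributionVisualizer(
    models=[model],
    score_func=lambda out: F.softmax(out, dim=1),
    classes=[f"class_{i}" for i in range(1000)],  # replace with real label names
    features=[
        ImageFeature(
            "Image",
            baseline_transforms=[lambda x: x * 0],  # all-black baseline
            input_transforms=[],                    # normalization would go here
        )
    ],
    dataset=batch_iterator(),
)

visualizer.render()   # inline widget inside a Jupyter notebook
# visualizer.serve()  # or launch the standalone web UI
```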
Using https://github.com/pytorch/captum, we need to find some good benchmarks that show the reason for a verdict/diagnosis.
Our goal is to provide a verifiable second opinion on a diagnosis.
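One concrete way such a second opinion could be presented: overlay the attribution map on the input image so a clinician can see which regions drove the prediction. A hedged sketch using Captum's visualization helper, with random placeholder arrays standing in for real attributions and a real scan:

```python
# Hedged sketch: blend an attribution heatmap over the input image so the
# evidence behind a prediction can be inspected visually.
import numpy as np
from captum.attr import visualization as viz

# Placeholders (HxWxC); in the real pipeline these would come from an
# attribution method such as the Integrated Gradients sketch above.
attr_np = np.random.randn(224, 224, 3)
img_np = np.random.rand(224, 224, 3)  # image values in [0, 1]

fig, ax = viz.visualize_image_attr(
    attr_np,
    original_image=img_np,
    method="blended_heat_map",  # heatmap blended over the original image
    sign="positive",            # show only evidence *for* the predicted class
    show_colorbar=True,
    title="Regions supporting the prediction",
)
```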