# Deep Learning Explainability: How a Model Detects Anomalies in COVID Chest X-rays

CS 6771J1, Deep Learning, Fall 2020
New Jersey Institute of Technology
Project 2: Explainability of Deep Learning Models on COVID Chest X-rays

Group 4 participants:

* Paul Aggarwal
* Navneet Kala
* Akash Shrivastava

You are given the COVID X-ray / CT imaging dataset and:

  1. First, you find this implementation of the method called Local Interpretable Model-Agnostic Explanations (LIME). You also read this article, get your hands dirty, and replicate the results (a minimal sketch of the workflow follows this list item):
     * All image outputs are presented in their respective Jupyter Notebook files.
     * Please scroll to the bottom of each file to see the images.
     * We used the deep CNN ResNet model as specified in the original model.py file.
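
A minimal sketch of that replication, assuming a PyTorch ResNet and the `lime` package; the pretrained model, preprocessing, and image path below are illustrative stand-ins, not the project's exact code:

```python
# Sketch: explain one chest X-ray prediction with LIME's image explainer.
# The pretrained ResNet and the file path are placeholders for illustration.
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from lime import lime_image
from skimage.io import imread
from skimage.segmentation import mark_boundaries

model = models.resnet18(pretrained=True)  # stand-in for the trained COVID classifier
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),  # HWC uint8 -> CHW float in [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def batch_predict(images):
    """LIME passes a batch of perturbed H x W x 3 images; return class probabilities."""
    batch = torch.stack([preprocess(img) for img in images], dim=0)
    with torch.no_grad():
        logits = model(batch)
    return F.softmax(logits, dim=1).numpy()

xray_rgb = imread("covid_xray.png")  # hypothetical path to an RGB chest X-ray

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    xray_rgb,
    batch_predict,
    top_labels=2,        # explain the two highest-scoring classes
    hide_color=0,        # perturbed superpixels are blacked out
    num_samples=1000,    # perturbations used to fit the local linear model
)

# Overlay the most influential superpixels for the top predicted class.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(temp / 255.0, mask)
```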

  2. A fellow AI engineer tells you about another method called SHAP, which stands for SHapley Additive exPlanations, and she mentions that Shapley was a Nobel Prize winner, so it must be important. You then find out that Google is using it and wrote a readable white paper about it, and your excitement grows. Your manager sees you in the corridor and mentions that your work is needed soon. You are keen to impress her and start writing your 3-5 page summary of the SHAP approach as it can be applied to explaining deep learning classifiers such as the ResNet network used in (1). (The Shapley value at the heart of the method is sketched below.)
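
For reference while writing the summary: the core of SHAP is the classic Shapley value from cooperative game theory. Each feature $i$ receives the average of its marginal contributions over all subsets $S$ of the remaining features $N \setminus \{i\}$, where $f_x(S)$ denotes the model's expected output when only the features in $S$ are known:

$$
\phi_i(f, x) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!} \Bigl[ f_x(S \cup \{i\}) - f_x(S) \Bigr]
$$

Exact enumeration is exponential in $|N|$, which is why SHAP's deep-learning explainers approximate these values rather than compute them directly.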

  3. After your presentation, your manager is clearly impressed with the depth of the SHAP approach and asks for some results explaining the COVID-19 diagnoses with it. You notice that the extremely popular SHAP GitHub repo already has an example applying the VGG16 network to ImageNet. You think it won't be too difficult to plug in the model you trained in (1) and explain it (a minimal adaptation is sketched below):
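
A minimal sketch of that adaptation, following the pattern of the SHAP repo's `GradientExplainer` image examples. The variable names here (`covid_resnet`, `background`, `test_images`) are placeholders for illustration, not names from the project code:

```python
# Sketch: swap the trained COVID ResNet into SHAP's GradientExplainer workflow.
# All variable names below are placeholders for illustration.
import numpy as np
import shap
import torch
from torchvision import models

covid_resnet = models.resnet18(pretrained=True)  # stand-in for the model trained in (1)
covid_resnet.eval()

# Background batch used by SHAP to integrate out "missing" features;
# in practice, a sample of preprocessed training X-rays.
background = torch.randn(16, 3, 224, 224)
# Images whose predictions we want to explain.
test_images = torch.randn(2, 3, 224, 224)

explainer = shap.GradientExplainer(covid_resnet, background)
# ranked_outputs=2 returns attributions for the two highest-scoring classes.
shap_values, indexes = explainer.shap_values(test_images, ranked_outputs=2)

# shap.image_plot expects channel-last arrays (N, H, W, C).
shap_numpy = [np.swapaxes(np.swapaxes(s, 1, -1), 1, 2) for s in shap_values]
test_numpy = np.swapaxes(np.swapaxes(test_images.numpy(), 1, -1), 1, 2)
shap.image_plot(shap_numpy, test_numpy)
```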