Is there a way to get the relevance of model parameters, in addition to that of the inputs (and intermediate results)?

The PLOS paper ("On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation") describes per-neuron relevance conservation; hence, I assume it would make sense to keep track of the incoming (or outgoing) relevance of each neuron (and, more generally, of each DNN parameter), and to make that accessible to users of the library.
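For context, the conservation property I mean can be illustrated with a minimal numpy sketch of the LRP z⁺-style redistribution rule through a single linear layer (this is my own toy illustration, not the library's implementation; all names here are made up):

```python
import numpy as np

# Toy illustration of per-neuron relevance conservation (LRP z+-style rule)
# through one bias-free linear layer. Positive activations and weights keep
# the denominators well away from zero.
rng = np.random.default_rng(0)
a = rng.random(4)                 # activations entering the layer
W = rng.random((4, 3))            # positive weights (hypothetical layer)

z = a @ W                         # pre-activations of the 3 output neurons
R_out = rng.random(3)             # relevance arriving at the output neurons

# Redistribute each output neuron's relevance to its inputs in proportion
# to the contribution a_j * w_jk / z_k.
R_in = a * (W @ (R_out / z))

# Conservation: total relevance is preserved from layer to layer, so the
# per-neuron incoming/outgoing relevances could be recorded and exposed.
print(np.allclose(R_in.sum(), R_out.sum()))
```

The last check prints `True`: the summed relevance of the input neurons equals the summed relevance of the output neurons, which is exactly the per-neuron bookkeeping I am asking the library to expose.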
Does that make sense? Thanks!