Commit fcfc7f9
_explain to _evaluate in base metric
dilyabareeva committed Apr 22, 2024
1 parent: 4422e4b
Showing 3 changed files with 4 additions and 4 deletions.
Makefile (1 addition, 1 deletion)
@@ -5,7 +5,7 @@ SHELL = /bin/bash
 .PHONY: style
 style:
 	black .
-	python3 -m isort .
+	python -m isort .
 	rm -f .coverage
 	rm -f .coverage.*
 	find . | grep -E "(__pycache__|\.pyc|\.pyo)" | xargs rm -rf
src/metrics/base.py (2 additions, 2 deletions)
@@ -24,7 +24,7 @@ def __call__(
         1) Universal assertions about the passed arguments, incl. checking that the lengths of the train/test
            dataset and explanations match.
-        2) Call the _explain method.
+        2) Call the _evaluate method.
         3) Format the output into a unified format for all metrics, possibly using some arguments passed in kwargs.
         :param model:
@@ -38,7 +38,7 @@ def __call__(
         raise NotImplementedError

     @abstractmethod
-    def _explain(
+    def _evaluate(
         self,
         model: torch.nn.Module,
         train_dataset: torch.utils.data.Dataset,
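The rename makes the base class's template-method flow read consistently: __call__ validates the inputs, delegates to the abstract _evaluate hook, and formats the result. Below is a minimal sketch of that flow; the __call__ body and the DatasetSizeMetric subclass are illustrative assumptions, and only the _evaluate signature comes from the diff above.

from abc import ABC, abstractmethod

import torch.utils.data


class Metric(ABC):
    # Sketch of the base class around the renamed hook. The assertion and the
    # output formatting are placeholders for steps 1) and 3) of the docstring
    # above (assumed, not the repository's actual code).
    def __call__(self, model, train_dataset, *args, **kwargs):
        assert len(train_dataset) > 0  # step 1: universal assertions
        raw = self._evaluate(model, train_dataset, *args, **kwargs)  # step 2
        return {"score": float(raw)}  # step 3: unified output format

    @abstractmethod
    def _evaluate(
        self,
        model: torch.nn.Module,
        train_dataset: torch.utils.data.Dataset,
        *args,
        **kwargs,
    ):
        raise NotImplementedError


class DatasetSizeMetric(Metric):
    # Hypothetical concrete metric: only the hook is overridden; validation
    # and formatting stay in the base class's __call__.
    def _evaluate(self, model, train_dataset, *args, **kwargs):
        return len(train_dataset)

Subclasses thus override only _evaluate, which is also why the rename matters: the hook computes the metric value rather than producing explanations, so _explain was a misleading name.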
src/utils/datasets/mark_dataset.py (1 addition, 1 deletion)
@@ -17,7 +17,7 @@ def __init__(
         p: float = 0.3,
         cls_to_mark: int = 2,
         mark_fn: Optional[Union[Callable, str]] = None,
-        only_train: bool = False
+        only_train: bool = False,
     ):
         super().__init__()
         self.dataset = dataset
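For context, a self-contained sketch of the wrapper this constructor appears to belong to. The class name MarkedDataset, the default marking function, and the per-sample logic are assumptions inferred from the file name; only the constructor parameters shown in the diff are taken from the source.

import random
from typing import Callable, Optional, Union

import torch
import torch.utils.data


class MarkedDataset(torch.utils.data.Dataset):
    # Hypothetical wrapper: marks a fraction p of the samples of one class,
    # e.g. to plant a controlled artifact for attribution experiments.
    def __init__(
        self,
        dataset: torch.utils.data.Dataset,
        p: float = 0.3,
        cls_to_mark: int = 2,
        mark_fn: Optional[Union[Callable, str]] = None,
        only_train: bool = False,
    ):
        super().__init__()
        self.dataset = dataset
        self.p = p
        self.cls_to_mark = cls_to_mark
        # A string value would presumably name a registered mark function;
        # that lookup is omitted here (assumption).
        self.mark_fn = mark_fn if callable(mark_fn) else self._default_mark
        self.only_train = only_train  # split handling omitted in this sketch

    @staticmethod
    def _default_mark(img: torch.Tensor) -> torch.Tensor:
        # Assumed default mark: a small white square in the top-left corner.
        img = img.clone()
        img[..., :4, :4] = 1.0
        return img

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        img, label = self.dataset[idx]
        # Labels are assumed to be integer class ids.
        if int(label) == self.cls_to_mark and random.random() < self.p:
            img = self.mark_fn(img)
        return img, label

The trailing comma added in the diff is purely stylistic: it keeps future parameter additions to one-line diffs and matches black's multi-line formatting, which the Makefile's style target enforces.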
