diff --git a/docs/attacks/quantile.html b/docs/attacks/quantile.html
index 6e42fbd..c03cb29 100644
--- a/docs/attacks/quantile.html
+++ b/docs/attacks/quantile.html
@@ -180,7 +180,7 @@
 train_dataset or eval_dataset. Will default to [default_data_collator] if no tokenizer is provided, an instance of [DataCollatorWithPadding] otherwise.
-train_dataset (torch.utils.data.Dataset or torch.utils.data.IterableDataset, optional):
+train_dataset (Union[torch.utils.data.Dataset, torch.utils.data.IterableDataset, datasets.Dataset], optional):
 The dataset to use for training. If it is a [~datasets.Dataset], columns not accepted by the model.forward() method are automatically removed.
 Note that if it's a torch.utils.data.IterableDataset with some randomization and you are training in a
@@ -189,7 +189,7 @@ Args
 manually set the seed of this generator at each epoch) or have a set_epoch() method that internally sets the seed of the RNGs used.
-eval_dataset (Union[torch.utils.data.Dataset, Dict[str, torch.utils.data.Dataset]), optional):
+eval_dataset (Union[torch.utils.data.Dataset, Dict[str, torch.utils.data.Dataset, datasets.Dataset]), optional):
 The dataset to use for evaluation. If it is a [~datasets.Dataset], columns not accepted by the model.forward() method are automatically removed. If it is a dictionary, it will evaluate on each dataset prepending the dictionary key to the metric name.
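Example (not part of the patch above): a minimal sketch of what the widened type union means in practice, assuming the Hugging Face Trainer API documented here; the checkpoint name and toy columns are placeholders, not taken from the diff.

from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hypothetical toy data; a datasets.Dataset now matches the documented type union directly.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

ds = Dataset.from_dict({"text": ["member sample", "non-member sample"], "label": [1, 0]})
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    tokenizer=tokenizer,   # so DataCollatorWithPadding is used, as described above
    train_dataset=ds,      # columns not accepted by model.forward() (e.g. "text") are removed
    eval_dataset=ds,
)
trainer.train()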
diff --git a/docs/config.html b/docs/config.html
index fa45df3..9c1fb4f 100644
--- a/docs/config.html
+++ b/docs/config.html
@@ -172,7 +172,7 @@
 mimir.config
 var fpr_list : Optional[List[float]]
-Process data token-wise?
+FPRs at which to compute TPR
 var full_doc : Optional[bool]
 var tok_by_tok : Optional[bool]
-FPRs at which to compute TPR
+Process data token-wise?
 var token_frequency_map : Optional[str]
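For readers of docs/config.html, a minimal sketch of how the four documented fields could be grouped in a config dataclass. The class name ExperimentConfig and all default values are assumptions for illustration, not taken from the generated docs.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExperimentConfig:
    # FPRs at which to compute TPR (default values assumed for illustration)
    fpr_list: Optional[List[float]] = field(default_factory=lambda: [0.001, 0.01, 0.1])
    # Documented field; its description is not present in this diff
    full_doc: Optional[bool] = False
    # Process data token-wise?
    tok_by_tok: Optional[bool] = False
    # Documented field; its description is not present in this diff
    token_frequency_map: Optional[str] = None

config = ExperimentConfig(fpr_list=[0.01, 0.05], tok_by_tok=True)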