
Update documentation
actions-user committed Mar 27, 2024
1 parent 13a77a4 commit 2a95067
Showing 7 changed files with 147 additions and 132 deletions.
4 changes: 2 additions & 2 deletions docs/attacks/loss.html
@@ -46,7 +46,7 @@ <h1 class="title">Module <code>mimir.attacks.loss</code></h1>
"""
LOSS-score. Use log-likelihood from model.
"""
- return self.model.get_ll(document, probs=probs, tokens=tokens)</code></pre>
+ return self.target_model.get_ll(document, probs=probs, tokens=tokens)</code></pre>
</details>
</section>
<section>
@@ -78,7 +78,7 @@ <h2 class="section-title" id="header-classes">Classes</h2>
"""
LOSS-score. Use log-likelihood from model.
"""
- return self.model.get_ll(document, probs=probs, tokens=tokens)</code></pre>
+ return self.target_model.get_ll(document, probs=probs, tokens=tokens)</code></pre>
</details>
<h3>Ancestors</h3>
<ul class="hlist">
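The change above renames `self.model` to `self.target_model`; the attack itself simply scores a document by its log-likelihood under the target model. A minimal self-contained sketch (the function and its raw-probability input are illustrative, not mimir's actual API, which goes through `target_model.get_ll`):

```python
import math

def loss_attack_score(token_probs):
    """LOSS attack: mean log-likelihood of the document under the target
    model. Scores near zero (tokens the model finds likely) suggest
    training-set membership; strongly negative scores suggest otherwise.
    """
    return sum(math.log(p) for p in token_probs) / len(token_probs)
```

In practice the score is compared against a threshold chosen on held-out data.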
8 changes: 4 additions & 4 deletions docs/attacks/min_k.html
@@ -45,7 +45,7 @@ <h1 class="title">Module <code>mimir.attacks.min_k</code></h1>
@ch.no_grad()
def _attack(self, document, probs, tokens=None, **kwargs):
"""
- Min-k % Prob Attack. Gets model praobbilities and returns likelihood when computed over top k% of ngrams.
+ Min-k % Prob Attack. Gets model probabilities and returns likelihood when computed over top k% of ngrams.
"""
# Hyper-params specific to min-k attack
k: float = kwargs.get("k", 0.2)
@@ -55,7 +55,7 @@ <h1 class="title">Module <code>mimir.attacks.min_k</code></h1>
all_prob = (
probs
if probs is not None
- else self.model.get_probabilities(document, tokens=tokens)
+ else self.target_model.get_probabilities(document, tokens=tokens)
)
# iterate through probabilities by ngram defined by window size at given stride
ngram_probs = []
@@ -94,7 +94,7 @@ <h2 class="section-title" id="header-classes">Classes</h2>
@ch.no_grad()
def _attack(self, document, probs, tokens=None, **kwargs):
"""
- Min-k % Prob Attack. Gets model praobbilities and returns likelihood when computed over top k% of ngrams.
+ Min-k % Prob Attack. Gets model probabilities and returns likelihood when computed over top k% of ngrams.
"""
# Hyper-params specific to min-k attack
k: float = kwargs.get("k", 0.2)
@@ -104,7 +104,7 @@ <h2 class="section-title" id="header-classes">Classes</h2>
all_prob = (
probs
if probs is not None
- else self.model.get_probabilities(document, tokens=tokens)
+ else self.target_model.get_probabilities(document, tokens=tokens)
)
# iterate through probabilities by ngram defined by window size at given stride
ngram_probs = []
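Per its docstring, the Min-K% attack keeps only the least likely fraction of the sequence. A self-contained sketch of the core scoring step (single tokens only; the mimir code above additionally aggregates over n-gram windows at a configurable stride):

```python
def min_k_prob_score(token_log_probs, k=0.2):
    """Min-K% Prob: average the lowest k-fraction of per-token
    log-probabilities. Members tend to contain fewer highly surprising
    tokens, so a higher (less negative) score suggests membership.
    """
    n_take = max(1, int(len(token_log_probs) * k))  # keep at least one token
    lowest = sorted(token_log_probs)[:n_take]       # the most surprising tokens
    return sum(lowest) / n_take
```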
6 changes: 3 additions & 3 deletions docs/attacks/neighborhood.html
@@ -619,7 +619,7 @@ <h2 class="section-title" id="header-classes">Classes</h2>
</code></dt>
<dd>
<div class="desc"><p>Base class (for LLMs).</p>
- <p>Initializes internal Module state, shared by both nn.Module and ScriptModule.</p></div>
+ <p>Initialize internal Module state, shared by both nn.Module and ScriptModule.</p></div>
<details class="source">
<summary>
<span>Expand source code</span>
@@ -1074,7 +1074,7 @@ <h3>Inherited members</h3>
</code></dt>
<dd>
<div class="desc"><p>Base class (for LLMs).</p>
- <p>Initializes internal Module state, shared by both nn.Module and ScriptModule.</p></div>
+ <p>Initialize internal Module state, shared by both nn.Module and ScriptModule.</p></div>
<details class="source">
<summary>
<span>Expand source code</span>
@@ -1367,7 +1367,7 @@ <h3>Inherited members</h3>
</code></dt>
<dd>
<div class="desc"><p>Base class (for LLMs).</p>
- <p>Initializes internal Module state, shared by both nn.Module and ScriptModule.</p></div>
+ <p>Initialize internal Module state, shared by both nn.Module and ScriptModule.</p></div>
<details class="source">
<summary>
<span>Expand source code</span>
10 changes: 5 additions & 5 deletions docs/attacks/quantile.html
@@ -121,7 +121,7 @@ <h1 class="title">Module <code>mimir.attacks.quantile</code></h1>
# Step 1: Use non-member dataset, collect confidence scores for correct label.
# Get likelihood scores from target model for known_non_members
# Note that these non-members should be different from the ones in testing
- scores = [self.model.get_ll(x) for x in known_non_members]
+ scores = [self.target_model.get_ll(x) for x in known_non_members]
# Construct a dataset out of this to be used in Huggingface, with
# "text" containing the actual data, and "labels" containing the scores
dataset = Dataset.from_dict({"text": known_non_members, "labels": scores})
@@ -133,7 +133,7 @@ <h1 class="title">Module <code>mimir.attacks.quantile</code></h1>
# Step 3: Test by checking if member: score is higher than output of quantile regression model.

# Get likelihood score from target model for doc
- ll = self.model.get_ll(document)
+ ll = self.target_model.get_ll(document)

# Return ll - quantile_model(doc)
tokenized = self.ref_model.tokenizer(document, return_tensors="pt")
@@ -361,7 +361,7 @@ <h3>Methods</h3>
# Step 1: Use non-member dataset, collect confidence scores for correct label.
# Get likelihood scores from target model for known_non_members
# Note that these non-members should be different from the ones in testing
- scores = [self.model.get_ll(x) for x in known_non_members]
+ scores = [self.target_model.get_ll(x) for x in known_non_members]
# Construct a dataset out of this to be used in Huggingface, with
# "text" containing the actual data, and "labels" containing the scores
dataset = Dataset.from_dict({"text": known_non_members, "labels": scores})
@@ -373,7 +373,7 @@ <h3>Methods</h3>
# Step 3: Test by checking if member: score is higher than output of quantile regression model.

# Get likelihood score from target model for doc
- ll = self.model.get_ll(document)
+ ll = self.target_model.get_ll(document)

# Return ll - quantile_model(doc)
tokenized = self.ref_model.tokenizer(document, return_tensors="pt")
@@ -413,7 +413,7 @@ <h3>Methods</h3>
# Step 1: Use non-member dataset, collect confidence scores for correct label.
# Get likelihood scores from target model for known_non_members
# Note that these non-members should be different from the ones in testing
- scores = [self.model.get_ll(x) for x in known_non_members]
+ scores = [self.target_model.get_ll(x) for x in known_non_members]
# Construct a dataset out of this to be used in Huggingface, with
# "text" containing the actual data, and "labels" containing the scores
dataset = Dataset.from_dict({"text": known_non_members, "labels": scores})
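The three steps in the snippet above (collect non-member scores, fit a quantile regression model, compare the document's likelihood against the predicted quantile) can be sketched with a single empirical quantile standing in for the trained regression model. This is a deliberate simplification: the real attack predicts a per-document threshold rather than one global one.

```python
def quantile_attack_score(doc_ll, non_member_lls, q=0.95):
    """Simplified quantile attack: score a document by how far its
    target-model log-likelihood sits above the q-th quantile of scores
    collected from known non-members. Positive scores predict membership.
    """
    scores = sorted(non_member_lls)
    # nearest-rank empirical quantile as a stand-in for quantile regression
    idx = min(len(scores) - 1, int(q * (len(scores) - 1)))
    return doc_ll - scores[idx]
```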
4 changes: 2 additions & 2 deletions docs/attacks/reference.html
@@ -50,7 +50,7 @@ <h1 class="title">Module <code>mimir.attacks.reference</code></h1>
"""
loss = kwargs.get('loss', None)
if loss is None:
- loss = self.model.get_ll(document, probs=probs, tokens=tokens)
+ loss = self.target_model.get_ll(document, probs=probs, tokens=tokens)
ref_loss = self.ref_model.get_ll(document, probs=probs, tokens=tokens)
return loss - ref_loss</code></pre>
</details>
@@ -89,7 +89,7 @@ <h2 class="section-title" id="header-classes">Classes</h2>
"""
loss = kwargs.get('loss', None)
if loss is None:
- loss = self.model.get_ll(document, probs=probs, tokens=tokens)
+ loss = self.target_model.get_ll(document, probs=probs, tokens=tokens)
ref_loss = self.ref_model.get_ll(document, probs=probs, tokens=tokens)
return loss - ref_loss</code></pre>
</details>
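The reference attack above calibrates the target model's loss with a second model. A minimal sketch, with plain callables standing in for `target_model.get_ll` and `ref_model.get_ll`:

```python
def reference_attack_score(document, target_ll, ref_ll):
    """Reference attack (calibrated loss): subtract the reference model's
    log-likelihood from the target model's. A large positive score means
    the target model fits the document unusually well relative to a model
    that never trained on it, hinting at membership.
    """
    return target_ll(document) - ref_ll(document)
```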
4 changes: 2 additions & 2 deletions docs/attacks/zlib.html
@@ -59,7 +59,7 @@ <h1 class="title">Module <code>mimir.attacks.zlib</code></h1>
"""
loss = kwargs.get("loss", None)
if loss is None:
- loss = self.model.get_ll(document, probs=probs, tokens=tokens)
+ loss = self.target_model.get_ll(document, probs=probs, tokens=tokens)
zlib_entropy = len(zlib.compress(bytes(document, "utf-8")))
return loss / zlib_entropy</code></pre>
</details>
@@ -103,7 +103,7 @@ <h2 class="section-title" id="header-classes">Classes</h2>
"""
loss = kwargs.get("loss", None)
if loss is None:
- loss = self.model.get_ll(document, probs=probs, tokens=tokens)
+ loss = self.target_model.get_ll(document, probs=probs, tokens=tokens)
zlib_entropy = len(zlib.compress(bytes(document, "utf-8")))
return loss / zlib_entropy</code></pre>
</details>
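The zlib attack above divides the model's loss by the document's compressed size, so text that is cheap for *any* compressor (boilerplate, repetition) is not mistaken for a training-set member purely on low loss. A self-contained sketch mirroring the two lines in the diff:

```python
import zlib

def zlib_attack_score(document, loss):
    """zlib attack: normalize the target model's log-likelihood (`loss`)
    by the zlib-compressed byte length of the document.
    """
    zlib_entropy = len(zlib.compress(document.encode("utf-8")))
    return loss / zlib_entropy
```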