Hello, while reading the code I ran into something I don't understand. If you have time, could you help me clarify it?
At line 99 of code/modeling.py there is:
```python
if posi_emb is not None:
    eps = 1e-10
    batch_size = key_layer.size(0)
    patch = key_layer
    # discriminator output and adversarial loss on the non-[CLS] tokens
    ad_out, loss_ad = lossZoo.adv_local(patch[:, :, 1:], ad_net, is_source)
    # binary entropy of ad_out
    entropy = -ad_out * torch.log2(ad_out + eps) - (1.0 - ad_out) * torch.log2(1.0 - ad_out + eps)
    # prepend a column of ones for the [CLS] position
    entropy = torch.cat((torch.ones(batch_size, self.num_attention_heads, 1).to(hidden_states.device).float(), entropy), 2)
    trans_ability = entropy if self.vis else None  # [B, 12, 197]
    entropy = entropy.view(batch_size, self.num_attention_heads, 1, -1)
    # reweight only the [CLS] attention row; the other rows are unchanged
    attention_probs = torch.cat((attention_probs[:, :, 0, :].unsqueeze(2) * entropy, attention_probs[:, :, 1:, :]), 2)
```
What I don't understand is: is this where the adversarial discriminator loss is applied? And which part of ViT did you modify?
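To make sure I'm reading it correctly, here is a minimal standalone sketch of what this block appears to do (the function name `entropy_weighted_cls_attention` and the toy shapes below are my own for illustration, not from your repo):

```python
import torch

def entropy_weighted_cls_attention(attention_probs, ad_out, eps=1e-10):
    """Reweight the [CLS]-token attention row by per-patch entropy.

    attention_probs: [B, H, N, N] softmax attention (token 0 is [CLS]).
    ad_out:          [B, H, N-1]  patch-level discriminator outputs in (0, 1).
    """
    B, H, N, _ = attention_probs.shape
    # Binary entropy of the discriminator output: patches the discriminator is
    # unsure about (output near 0.5) get weights near 1, confidently
    # classified patches get weights near 0.
    entropy = -ad_out * torch.log2(ad_out + eps) - (1.0 - ad_out) * torch.log2(1.0 - ad_out + eps)
    # Keep the [CLS] position itself at weight 1, then rescale only the
    # [CLS] -> token attention row; token-to-token attention is untouched.
    weights = torch.cat((torch.ones(B, H, 1, device=ad_out.device), entropy), dim=2)   # [B, H, N]
    cls_row = attention_probs[:, :, 0, :].unsqueeze(2) * weights.unsqueeze(2)          # [B, H, 1, N]
    return torch.cat((cls_row, attention_probs[:, :, 1:, :]), dim=2)

# Toy shapes: batch 2, 12 heads, 197 tokens (1 [CLS] + 196 patches), as in ViT-B/16.
attn = torch.softmax(torch.randn(2, 12, 197, 197), dim=-1)
disc = torch.sigmoid(torch.randn(2, 12, 196))  # stand-in for the patch discriminator output
print(entropy_weighted_cls_attention(attn, disc).shape)  # torch.Size([2, 12, 197, 197])
```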
Hello, I think I now understand the code above. Is it part of the TAM module you defined? I have a further question: I would like to learn how you add only the adversarial discriminators on top of ViT. Which part of code/modeling.py implements the first part of your work?
Hi, thanks for your interest in our paper.
I don't fully understand what you mean by "add only the adversarial discriminators on top of ViT" -- could you elaborate on that? Thanks.