
Hello senior, may I ask a question? #1

Open
dugzzuli opened this issue Sep 14, 2020 · 2 comments


@dugzzuli

Hello senior, I have a question about whether the self-expression parameters are trained.

In the deep self-expression (subspace clustering) method, the original authors' TensorFlow implementation does not appear to train the self-expression layer parameters by backpropagation:

```python
self.saver = tf.train.Saver([v for v in tf.trainable_variables() if not (v.name.startswith("Coef"))])
```

In your PyTorch implementation, however, `self.Coefficient = nn.Parameter(1.0e-4 * torch.ones(n, n, dtype=torch.float32), requires_grad=True)` seems to be included in backpropagation.

Are the two doing the same thing, or am I misunderstanding one of the two frameworks?

Thank you very much, senior.

Du Guowang, Yunnan University
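
For reference, the `var_list` given to `tf.train.Saver` in TensorFlow 1.x only selects which variables get checkpointed; it does not remove `Coef` from `tf.trainable_variables()`, so the quoted line by itself does not exclude `Coef` from training. Below is a minimal sketch (not the repository's exact code) of how a PyTorch self-expression layer of this kind trains its coefficient matrix; the class name `SelfExpression` and the toy sizes `n`, `d` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SelfExpression(nn.Module):  # illustrative name, not necessarily the repo's
    def __init__(self, n):
        super().__init__()
        # Registered as an nn.Parameter, so autograd tracks and updates it,
        # mirroring the line quoted above.
        self.Coefficient = nn.Parameter(
            1.0e-4 * torch.ones(n, n, dtype=torch.float32), requires_grad=True
        )

    def forward(self, z):
        # Self-expression: rebuild each latent code as a linear
        # combination of all samples' codes, z_hat = C @ z.
        return torch.matmul(self.Coefficient, z)

n, d = 8, 16                       # assumed toy sizes
layer = SelfExpression(n)
z = torch.randn(n, d)              # stand-in for the autoencoder latents
loss = torch.sum(torch.pow(layer(z) - z, 2))  # self-expression loss ||z - Cz||_F^2
loss.backward()
print(layer.Coefficient.grad is not None)     # True: C receives gradients
```

If that reading is right, both implementations train the coefficient matrix; they differ only in whether it is saved to the checkpoint.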

@linguoting

Hello, I'm also reading this code at the moment. Would you be open to discussing it?

@long123long

`self.Coefficient` (the self-expression layer parameters) is trained together with everything else, right? What I'd like to know about `loss_coef = torch.sum(torch.pow(self.self_expression.Coefficient, 2))` is: what difference does it make to use different norms for the loss on the self-expression parameters?
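
For what it's worth, here is a minimal sketch of the two common penalties on a toy coefficient matrix `C`; the quoted `loss_coef` is the squared Frobenius (ℓ2) case. In the subspace clustering literature, an ℓ1 penalty (as in sparse subspace clustering) pushes entries of `C` to exactly zero and yields a sparse affinity matrix, while the squared ℓ2 penalty only shrinks entries and leaves `C` dense; the DSC-Net paper evaluates both an ℓ1 and an ℓ2 variant.

```python
import torch

C = torch.randn(8, 8, requires_grad=True)  # toy coefficient matrix

# Squared Frobenius / L2 penalty (the quoted loss_coef): its gradient 2*C
# scales with the entry, so small entries are barely penalized and C stays dense.
loss_l2 = torch.sum(torch.pow(C, 2))

# L1 penalty (sparse subspace clustering style): its subgradient sign(C) has
# constant magnitude, driving small entries toward exactly zero, so C becomes sparse.
loss_l1 = torch.sum(torch.abs(C))
```

In practice the choice trades off sparsity of the learned affinity against ease of optimization, which is why implementations often default to the smooth ℓ2 version.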
