Hello senior, I have a question about whether the self-expression parameters are trained.
In the deep self-expressive subspace method, the original author's implementation does not seem to train the self-expression layer parameters by backpropagation: self.saver = tf.train.Saver([v for v in tf.trainable_variables() if not (v.name.startswith("Coef"))])
In your PyTorch reimplementation, however, self.Coefficient = nn.Parameter(1.0e-4 * torch.ones(n, n, dtype=torch.float32), requires_grad=True) appears to be included in backpropagation.
Are these two equivalent, or am I misunderstanding how the two frameworks work?
Thank you very much.
Du Guowang, Yunnan University
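For reference, here is a minimal PyTorch sketch of the self-expression layer being discussed. Only the Coefficient line comes from the snippet above; the class name and forward pass are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SelfExpression(nn.Module):
    # Self-expression layer: reconstruct each latent code z_i as a linear
    # combination of all latent codes, z_hat = C @ z, with C an n x n matrix.
    def __init__(self, n):
        super().__init__()
        # Coefficient is registered as an nn.Parameter with requires_grad=True,
        # so it receives gradients and is updated by backprop once it is
        # handed to an optimizer via model.parameters().
        self.Coefficient = nn.Parameter(
            1.0e-4 * torch.ones(n, n, dtype=torch.float32), requires_grad=True)

    def forward(self, z):  # z: (n, d) matrix of latent codes
        return torch.matmul(self.Coefficient, z)
```

A quick way to check whether a parameter actually participates in training is to inspect its .grad after loss.backward(), or to confirm it appears in model.parameters(). Also worth noting: passing a var_list to tf.train.Saver only controls which variables are saved to and restored from checkpoints; by itself it does not exclude those variables from the training step.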
Hi, I'm also reading this code at the moment. Could we discuss it?
self.Coefficient (the self-expression layer parameters) is trained together with everything else, right? What I'd really like to know is: for loss_coef = torch.sum(torch.pow(self.self_expression.Coefficient, 2)), what difference does it make to use different norms in the loss on the self-expression layer parameters?
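As a minimal sketch of the two regularizers usually compared here (C is a hypothetical stand-in for self.self_expression.Coefficient):

```python
import torch

C = torch.randn(5, 5, requires_grad=True)  # stand-in for the n x n coefficient matrix

# Squared Frobenius (l2) penalty, matching the loss_coef line above:
# shrinks every entry toward zero but typically leaves C dense.
loss_l2 = torch.sum(torch.pow(C, 2))

# l1 penalty: encourages many entries to become exactly zero, giving a sparse C.
loss_l1 = torch.sum(torch.abs(C))
```

Roughly, the l2 penalty yields a dense but smoothly shrunk coefficient matrix, while the l1 penalty promotes sparsity, so each sample is expressed by only a few others; if I recall correctly, the original DSC-Net paper evaluates both choices as separate variants (DSC-Net-L1 and DSC-Net-L2).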