
How does dataset size affect checkpoint restoration during testing? #4

Open
weihualiuhupituzi opened this issue Jun 7, 2018 · 0 comments

weihualiuhupituzi commented Jun 7, 2018

Hello, and thank you very much for your code. I have a question: my machine does not have enough memory to run vec.py on the full training data, so it could not save everything into dataset.pkl. I therefore reduced the training set somewhat, after which vec.py ran successfully. I then tried to run prediction directly with 30000.ckpt, but the step saver.restore(sess, '/home/weihua/git/tensorflow/faceID/DeepID1-master/checkpoint/30000.ckpt') always raises an error. Does the restored model require a dataset that matches the complete one exactly before prediction can work?
Part of the error output that may be relevant:
InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [1273] rhs shape= [1283] [[Node: save/Assign_11 = Assign[T=DT_FLOAT, _class=["loc:@loss/nn_layer/biases/Variable"], use_locking=true, validate_shape=true,......
Thank you very much.
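The error message itself points at the likely cause: the checkpoint stores a bias vector of length 1283 (one unit per identity class in the full training set), while the graph rebuilt from the reduced dataset has only 1273 classes, and tf.train.Saver's restore requires identical shapes. A minimal sketch of the diagnosis, in plain Python with hypothetical variable names and shapes standing in for what a checkpoint reader would report, showing how one could identify which variables are still restorable:

```python
# Hypothetical shapes: what the 30000.ckpt checkpoint stores vs. what the
# graph built from the reduced dataset declares. Convolutional layers do not
# depend on the number of identity classes, so their shapes still match;
# the final classification layer does depend on it, so it no longer matches.
checkpoint_shapes = {
    "conv1/weights": [4, 4, 3, 20],           # class-count independent
    "loss/nn_layer/biases/Variable": [1283],  # 1283 identities in full set
}
graph_shapes = {
    "conv1/weights": [4, 4, 3, 20],
    "loss/nn_layer/biases/Variable": [1273],  # only 1273 after reduction
}

def restorable(ckpt, graph):
    """Return names whose shapes agree in both the checkpoint and the
    current graph -- only these could be handed to a Saver for a
    partial restore; mismatched ones must be re-initialized."""
    return [name for name, shape in graph.items()
            if ckpt.get(name) == shape]

print(restorable(checkpoint_shapes, graph_shapes))  # ['conv1/weights']
```

So yes: to restore this checkpoint as-is, the graph must be built with the same number of classes as the full training set. Alternatives would be restoring only the shape-matching lower layers (e.g. via a tf.train.Saver constructed with that subset of variables) and re-initializing the final layer, or retraining a checkpoint on the reduced dataset.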
