Question about converting to ONNX #117
I think it's because tensor size tracing is impossible in the upsampling step during export. Try this!
```python
def _upsample_add(self, x, y):
    # previously: return F.interpolate(x, size=(H, W), mode='bilinear') + y
    _, _, H, W = y.size()
    upsample = nn.Upsample(size=(H, W), mode='bilinear')  # optionally align_corners=True
    return upsample(x) + y

def _upsample(self, x, size, scale=1):
    # previously: return F.interpolate(x, size=(H // scale, W // scale), mode='bilinear')
    _, _, H, W = size
    upsample = nn.Upsample(size=(H // scale, W // scale), mode='bilinear')  # optionally align_corners=True
    return upsample(x)
```
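For context, here is a minimal self-contained sketch of how these two methods sit inside a module. The `TinyFPN` name, layer choices, and channel counts are illustrative only (not from this repo), but an instance of it can stand in for `model` in the export call below:

```python
import torch
import torch.nn as nn

class TinyFPN(nn.Module):
    """Illustrative two-level pyramid wired around the patched upsampling."""
    def __init__(self):
        super().__init__()
        self.down = nn.Conv2d(1, 8, kernel_size=3, stride=2, padding=1)
        self.lateral = nn.Conv2d(1, 8, kernel_size=1)
        self.head = nn.Conv2d(8, 1, kernel_size=1)

    def _upsample_add(self, x, y):
        _, _, H, W = y.size()
        upsample = nn.Upsample(size=(H, W), mode='bilinear')
        return upsample(x) + y

    def forward(self, x):
        low = self.down(x)      # half-resolution branch
        lat = self.lateral(x)   # full-resolution branch
        return self.head(self._upsample_add(low, lat))
```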
```python
dynamic_axes = {
    'in': {
        0: 'batch',
        2: 'Height',  # axis 2 of an NCHW tensor is height
        3: 'Width'    # axis 3 is width
    },
    'out': {
        0: 'batch',
        2: 'Height',
        3: 'Width'
    }
}

torch.onnx.export(
    model,
    inputData,
    "test.onnx",
    input_names=["in"],
    output_names=["out"],
    dynamic_axes=dynamic_axes,
)
```
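As a quick sanity check (assuming the export above succeeded), running the file through onnxruntime at a size other than the trace-time one shows whether the spatial axes really stayed dynamic; the 1×320×256 input below is an arbitrary choice:

```python
import numpy as np
import onnxruntime

sess = onnxruntime.InferenceSession("test.onnx")
# a size deliberately different from the one used at export time
dummy = np.random.randn(1, 1, 320, 256).astype(np.float32)
out = sess.run(None, {"in": dummy})[0]
print(out.shape)  # should follow the input size if the axes are truly dynamic
```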
With this change, does the model need to be retrained before regenerating the ONNX file?
Is the inputData here the value I provided earlier in my code?
Can the exported model support both CPU and GPU?
Please check this code!
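On the CPU/GPU question: ONNX Runtime can run the same exported file on either device by selecting an execution provider. A minimal sketch (the `providers` argument is available in recent onnxruntime releases, and CUDAExecutionProvider requires the onnxruntime-gpu package):

```python
import onnxruntime

# Same .onnx file; the execution provider decides the device.
# Falls back to CPU when the CUDA provider is not installed.
providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
sess = onnxruntime.InferenceSession("net.onnx", providers=providers)
print(sess.get_providers())  # shows which providers were actually loaded
```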
I have read your convertor code and successfully converted the model to a CPU version; it runs at about 11 s per image. To speed it up further, I tried converting to ONNX but ran into problems. Could you advise on the correct conversion method? My code is below (placed in TestModel.py, inside the init method, right after model.load_state_dict(d)):
```python
import onnx
import onnxruntime

export_onnx_file = './net.onnx'
torch.onnx.export(model,
                  torch.randn(1, 1, 224, 224, device='cuda'),
                  export_onnx_file,
                  verbose=False,
                  input_names=["inputs"] + ["params_%d" % i for i in range(120)],
                  output_names=["outputs"],
                  opset_version=10,
                  do_constant_folding=True,
                  # dynamic batch plus dynamic spatial dims on the input
                  dynamic_axes={"inputs": {0: "batch_size", 2: "h", 3: "w"},
                                "outputs": {0: "batch_size"}})
```
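Once that call goes through, a structural check with the onnx package imported above can catch a malformed graph before it reaches a runtime (standard onnx APIs, nothing repo-specific):

```python
onnx_model = onnx.load(export_onnx_file)
onnx.checker.check_model(onnx_model)  # raises if the exported graph is invalid
print(onnx.helper.printable_graph(onnx_model.graph))  # optional: inspect the ops
```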