LowerConvsToMatMul transformation not working in finn. #1255
Replies: 6 comments 1 reply
-
Hi,
-
Hi, yes. I have applied the following transformations before.
My model is composed of 2D convolutions. At first it was composed of 1D convolutions, but those weren't supported, so I switched to 2D. The following is one of the layers in Brevitas:
-
I ran that code and I still get the following error:
-
I think I know the problem now. My QONNX model has DequantizeLinear nodes. For example, the following is an input to my conv layer:
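As a plain-Python illustration of why DequantizeLinear nodes can trigger this error (this is a hypothetical sketch of the failure mode, not FINN's actual code): the lowering transform looks the Conv weight up as a static graph initializer, and when the weight is instead produced by a DequantizeLinear node, that lookup comes back empty:

```python
# Hypothetical sketch (not FINN's actual implementation) of the failure mode:
# LowerConvsToMatMul fetches the Conv weight tensor as a static initializer.
# If the weight is instead the output of a DequantizeLinear node, the lookup
# returns None, and transposing None raises the observed AttributeError.

def get_initializer(initializers, tensor_name):
    # Returns the stored weight array, or None if the tensor is not a
    # graph initializer (e.g. it is the output of a DequantizeLinear node).
    return initializers.get(tensor_name)

initializers = {}  # weight "W" is *not* here; it comes from a DequantizeLinear
weight = get_initializer(initializers, "W")

try:
    weight.transpose()  # mimics the lowering step that reorders weight axes
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'transpose'
```

This matches the reported traceback: the transform never checks whether the weight lookup succeeded before calling .transpose() on it.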
-
I have another question. I converted from Brevitas to QONNX using
Then I started applying the conversion steps for FINN, but I get the following error for the second step.
-
Thank you @fpjentzsch. I was using the wrong conversion function.
-
I am trying to convert a Brevitas model to FINN-ONNX. When I run the code
onnx_model_wrapper.transform(LowerConvsToMatMul())
I get
AttributeError: 'NoneType' object has no attribute 'transpose'
I converted from Brevitas to ONNX using export_qonnx. Why do I get that error?
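A rough sketch of the conversion flow the thread converges on: export with export_qonnx, clean up the QONNX graph, convert it to the FINN-ONNX dialect (which folds the Quant/DequantizeLinear nodes away) and only then lower convolutions. This is a minimal sketch under assumptions: module paths are taken from recent finn/qonnx/brevitas releases, and the model and input shape are placeholders, so check everything against your installed versions.

```python
# Hedged sketch of the QONNX-to-FINN flow; names assumed from recent releases.
import torch
from brevitas.export import export_qonnx
from qonnx.core.modelwrapper import ModelWrapper
from qonnx.util.cleanup import cleanup as qonnx_cleanup
from qonnx.transformation.lower_convs_to_matmul import LowerConvsToMatMul
from finn.transformation.qonnx.convert_qonnx_to_finn import ConvertQONNXtoFINN

# brevitas_model and the dummy input shape are placeholders for your own model.
export_qonnx(brevitas_model, torch.randn(1, 3, 32, 32), "model_qonnx.onnx")
qonnx_cleanup("model_qonnx.onnx", out_file="model_clean.onnx")

model = ModelWrapper("model_clean.onnx")
# Converts QONNX quantization nodes into FINN's dialect, so conv weights
# become plain initializers instead of DequantizeLinear outputs.
model = model.transform(ConvertQONNXtoFINN())
model = model.transform(LowerConvsToMatMul())
model.save("model_lowered.onnx")
```

Running LowerConvsToMatMul before ConvertQONNXtoFINN is what reproduces the NoneType error discussed above, since the conv weights are still hidden behind DequantizeLinear nodes at that point.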