Hi, I'm struggling to create a stitched IP from a customized mobilenet-v1. My workflow is:
1. Train the model in brevitas. The model is based on the mobilenet-v1 example with a few modifications (reduced input image size, fewer output classes, fewer layers -> bigger average pool). Here is the model in QONNX format (exported with brevitas.export.export_qonnx): modelnet-v1-customized.zip
2. Build with step_qonnx_to_finn + step_tidy_up + the default steps from the FINN examples. This yields an error, which I guess is expected, since no normalization was done and the global input is still float32.
3. Add a custom build step that prepends the preprocessing and merges it into the model:
# Imports needed by this step. Module paths are as in recent FINN versions;
# older versions provide ModelWrapper etc. under finn.* instead of qonnx.*,
# and bo.export_finn_onnx comes from older brevitas releases.
import brevitas.onnx as bo
from qonnx.core.datatype import DataType
from qonnx.core.modelwrapper import ModelWrapper
from qonnx.transformation.fold_constants import FoldConstants
from qonnx.transformation.general import (
    GiveReadableTensorNames,
    GiveUniqueNodeNames,
    GiveUniqueParameterTensors,
)
from qonnx.transformation.infer_data_layouts import InferDataLayouts
from qonnx.transformation.infer_datatypes import InferDataTypes
from qonnx.transformation.infer_shapes import InferShapes
from qonnx.transformation.merge_onnx_models import MergeONNXModels
from finn.builder.build_dataflow_config import DataflowBuildConfig
from finn.util.pytorch import NormalizePreProc

def step_mobilenet_add_preprocessing(model: ModelWrapper, cfg: DataflowBuildConfig):
    # preprocessing: divide by 255, then normalize with the dataset mean/std
    # Based on https://github.com/Xilinx/finn/blob/b3bdff118ae076cb776af6e51ddc28eeaa0d6390/tests/end2end/test_end2end_mobilenet_v1.py#L91
    preproc_onnx = cfg.output_dir + "/intermediate_models/preproc.onnx"
    mean = [0.44531356896770125] * 3
    std = 0.2692461874154524
    ch = 3
    preproc = NormalizePreProc(mean, std, ch)
    # export the preprocessing network for the reduced 32x32 RGB input
    bo.export_finn_onnx(preproc, (1, 3, 32, 32), preproc_onnx)
    preproc_model = ModelWrapper(preproc_onnx)
    # set input finn datatype to UINT8
    preproc_model.set_tensor_datatype(preproc_model.graph.input[0].name, DataType["UINT8"])
    preproc_model = preproc_model.transform(InferShapes())
    preproc_model = preproc_model.transform(FoldConstants())
    preproc_model = preproc_model.transform(GiveUniqueNodeNames())
    preproc_model = preproc_model.transform(GiveUniqueParameterTensors())
    preproc_model = preproc_model.transform(GiveReadableTensorNames())
    preproc_model.save(preproc_onnx)
    preproc_model = ModelWrapper(preproc_onnx)
    # tidy up the main model, then prepend the preprocessing graph
    # Based on: https://github.com/Xilinx/finn/blob/41740ed1a953c09dd2f87b03ebfde5f9d8a7d4f0/tests/end2end/test_end2end_mobilenet_v1.py#L147
    model = model.transform(InferShapes())
    model = model.transform(FoldConstants())
    model = model.transform(InferShapes())
    model = model.transform(InferDataTypes())
    model = model.transform(InferDataLayouts())
    model = model.transform(GiveUniqueNodeNames())
    model = model.transform(GiveUniqueParameterTensors())
    model = model.transform(GiveReadableTensorNames())
    model = model.transform(MergeONNXModels(preproc_model))
    return model
This yields:
Running step: step_mobilenet_add_preprocessing [1/11]
Traceback (most recent call last):
File "/home/martin/dev/finn-examples/build/finn/src/finn/builder/build_dataflow.py", line 168, in build_dataflow_cfg
model = transform_step(model, cfg)
[...]
File "/opt/conda/lib/python3.8/site-packages/onnx/shape_inference.py", line 40, in infer_shapes
inferred_model_str = C.infer_shapes(
onnx.onnx_cpp2py_export.shape_inference.InferenceError: [TypeInferenceError] Cannot infer type and shape for node name . No opset import for domainonnx.brevitas optype Quant
Looks like the merge step doesn't understand the QONNX format (in particular the Quant op from brevitas): the graph being inferred apparently ends up without an opset import for the onnx.brevitas domain, which is exactly what the shape-inference error complains about (see the minimal reproduction sketch right after this list).
4. Move step_mobilenet_add_preprocessing after step_qonnx_to_finn in the step list (see the builder-config sketch at the end of this post). This yields the error from step 2 again.
5. Some failed attempts to split up step_mobilenet_add_preprocessing that are probably not worth listing here.
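The error message in step 3 points at a missing opset import: ONNX shape inference rejects any node whose domain is not listed in the model's opset imports. Here is a self-contained sketch of just that failure mode; the node and domain names merely mirror the traceback, this is not the attached model:

# Self-contained sketch of the failure mode the traceback points at: ONNX
# shape inference rejects any node whose domain has no matching opset import.
from onnx import TensorProto, helper, shape_inference

node = helper.make_node("Quant", ["inp"], ["out"], domain="onnx.brevitas")
graph = helper.make_graph(
    [node],
    "repro",
    [helper.make_tensor_value_info("inp", TensorProto.FLOAT, [1, 3, 32, 32])],
    [helper.make_tensor_value_info("out", TensorProto.FLOAT, [1, 3, 32, 32])],
)
# only the default ("") domain is imported; "onnx.brevitas" is missing
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 11)])
# raises InferenceError: ... No opset import for domain onnx.brevitas optype Quant
# (depending on the onnx version, possibly only with strict_mode=True)
shape_inference.infer_shapes(model)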
How can I add the preprocessing and only afterwards convert from QONNX to FINN ONNX? Or rather: How do I get my model to work?
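For reference, a sketch of how the custom step is wired into the builder. This is not my exact config: the output dir, clock period, FPGA part and model filename are placeholders, and the step list is abbreviated.

# Sketch of the builder invocation; everything marked "placeholder" is not
# from the original setup. step_mobilenet_add_preprocessing is the custom
# step defined above.
from finn.builder.build_dataflow import build_dataflow_cfg
from finn.builder.build_dataflow_config import (
    DataflowBuildConfig,
    DataflowOutputType,
)

cfg = DataflowBuildConfig(
    output_dir="output_mobilenet",  # placeholder
    synth_clk_period_ns=5.0,        # placeholder
    fpga_part="xc7z020clg400-1",    # placeholder
    generate_outputs=[DataflowOutputType.STITCHED_IP],
    steps=[
        step_mobilenet_add_preprocessing,  # custom step from step 3 above
        "step_qonnx_to_finn",              # swapping these two gives step 4
        "step_tidy_up",
        # ... remaining default steps from the FINN examples ...
    ],
)
build_dataflow_cfg("modelnet-v1-customized.onnx", cfg)  # placeholder filename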
TIA