New blog post: FINN and QONNX #414
maltanar
announced in
Announcements
FINN is taking another step towards increasing the flexibility of the framework by supporting a new input format for neural networks, called QONNX. QONNX gives FINN a much more flexible way to represent weight and activation quantization, especially at higher precisions and for fixed-point datatypes. For instance, it will enable future support for higher-precision quantized weights and activations while avoiding streamlining difficulties and expensive MultiThreshold-based activations. QONNX is being developed in close collaboration with hls4ml, which will also enable tighter interoperability between FINN and hls4ml.
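To give a flavor of what "representing weight and activation quantization" means, here is a minimal sketch of the uniform affine quantization that a quantization node in a format like QONNX describes. The function names and parameters (`scale`, `zero_point`, `bitwidth`, `signed`) are illustrative assumptions, not the actual QONNX API:

```python
def quantize(x, scale, zero_point, bitwidth, signed=True):
    """Map a float to an integer on a bitwidth-bit grid (illustrative sketch)."""
    if signed:
        qmin, qmax = -(2 ** (bitwidth - 1)), 2 ** (bitwidth - 1) - 1
    else:
        qmin, qmax = 0, 2 ** bitwidth - 1
    q = round(x / scale) + zero_point
    # Saturate to the representable integer range for this bitwidth
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Map the integer back to the float value it represents."""
    return (q - zero_point) * scale

# Example: 4-bit signed quantization with scale 0.1
q = quantize(0.37, scale=0.1, zero_point=0, bitwidth=4)      # integer code 4
x_hat = dequantize(q, scale=0.1, zero_point=0)               # reconstructed 0.4
```

The key point is that the precision (`bitwidth`) and scaling are explicit parameters of the representation, rather than being baked into threshold tables, which is what makes higher precisions tractable.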
Our long-time collaborator and current intern @HenniOVP wrote a blog post about QONNX with many more details, including an interactive Netron visualization of an example network in the new format.