# ML_YOLO

- This page guides you through training YOLO-series models with PyTorch or Darknet and converting them into fully quantized TFLite models for easy deployment on MCU/MPU devices. It also supports the Vela compiler for Arm NPU devices.
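Full INT8 quantization maps every float tensor in the model to 8-bit integers via a scale and a zero point. A minimal sketch of that affine mapping (the scale and zero-point values below are illustrative, not taken from the Nuvoton tooling):

```python
def quantize(x, scale, zero_point):
    """Affine-quantize a float to int8: q = round(x / scale) + zero_point."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))  # clamp to the int8 range

def dequantize(q, scale, zero_point):
    """Recover an approximate float from its int8 representation."""
    return (q - zero_point) * scale

# Example: values in [0, 1] quantized with scale 1/255 and zero point -128
scale, zp = 1.0 / 255.0, -128
q = quantize(0.5, scale, zp)
x = dequantize(q, scale, zp)  # close to 0.5, but not exact: quantization is lossy
```

The small reconstruction error here is exactly the accuracy gap the model-update notes in the table below are addressing.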

## 1. Choose the models

- There are three models available for deploying your customized object detection models on Nuvoton MCUs.

| Model | Training Framework | Int8 Full Quantization | TFLite Folder | Description |
| --- | --- | --- | --- | --- |
| Yolo Fastest v1.1 | Darknet | ✔️ | yolo_fastest_v1.1 | |
| YoloX-nano | PyTorch | ✔️ | yolox_ti_lite_tflite_int8 | Some model updates have been made to improve the accuracy of quantized models. Please check the folder link for more details. |
| Yolov8-nano | PyTorch | ✔️ | yolov8_ultralytics | Some model updates have been made to enhance the performance of quantized models. Please check the folder link for more details. |

## 2. Model Comparison

- Users can select a model based on their application's usage scenario.
- The performance figures below were generated with the Arm Vela compiler v3.10 and cover model inference on the NPU only. The mAP values are measured on the COCO val2017 dataset using the validation scripts for the TFLite INT8 models.
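Compiling a quantized TFLite model for an Ethos-U NPU with Vela typically looks like the following. This is a sketch: the model filename is a placeholder, and the accelerator configuration (`ethos-u55-256` here) must match your device, so check your board's documentation before using it.

```shell
# Install the Vela compiler, pinned to the version used for the figures above
pip install ethos-u-vela==3.10.0

# Compile a fully quantized INT8 TFLite model for an Ethos-U55 NPU
# (model name and accelerator config are placeholders)
vela yolov8n_int8.tflite --accelerator-config ethos-u55-256 --output-dir vela_out
```

Vela writes the compiled model and a performance summary into the output directory; only operators mapped to the NPU contribute to the inference figures quoted above.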

## Inference code

- Yolo_fastest_v1.1
- YoloX-nano / Yolov8-nano
  - The ML_SampleCode repositories are private. Please contact Nuvoton to request access to these sample codes. Link
  - MCU: ML_M55M1_SampleCode (private repo)
    - ObjectDetection_FreeRTOS_yoloxn/
    - ObjectDetection_YOLOv8n/
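All of the object-detection samples above end with the same post-processing step: decode the raw model output into boxes, then suppress overlapping detections. A minimal, framework-agnostic sketch of IoU-based non-maximum suppression (the 0.45 threshold is illustrative; the actual post-processing in the sample code may differ):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.45):
    """Keep the highest-scoring boxes; drop any box overlapping a kept one."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep

# Two heavily overlapping boxes and one distant box: NMS keeps indices 0 and 2
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)  # → [0, 2]
```

On an MCU this step usually runs on the CPU after NPU inference, which is why the comparison figures above count model inference only.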