
Warboy-Vision-Models

The warboy-vision-models project helps users run various deep learning vision models on FuriosaAI's first-generation NPU (Neural Processing Unit), Warboy. By following the steps outlined in the project, users can run vision applications such as Object Detection, Pose Estimation, and Instance Segmentation on Warboy.

We hope that the resources here will help you utilize the FuriosaAI Warboy in your applications.

Model List

Currently, the project supports all vision applications provided by the YOLO series (YOLOv9, YOLOv8, YOLOv7, and YOLOv5). To explore all available models in the warboy-vision-models repository and their detailed performance on Warboy, please refer to the following:

Object Detection

Object detection is a computer vision technique used to identify specific objects in images or videos and determine their locations. It classifies the objects in a video or photo (classification) and localizes each one precisely with a bounding box (localization), thereby detecting the objects.

Performance on Warboy
  • YOLOv8 Object Detection (COCO)

| Model   | Input Size (pixels) | mAPval 50-95 | Accuracy Drop (%) | Warboy Speed, Fusion (ms) | Warboy Speed, Single PE (ms) |
| ------- | ------------------- | ------------ | ----------------- | ------------------------- | ---------------------------- |
| YOLOv8n | 640x640             | 34.9         |                   |                           |                              |
| YOLOv8s | 640x640             | 42.9         |                   |                           |                              |
| YOLOv8m | 640x640             | 47.8         |                   |                           |                              |
| YOLOv8l | 640x640             | 50.4         |                   |                           |                              |
| YOLOv8x | 640x640             | 51.4         |                   |                           |                              |
  • YOLOv7 Object Detection (COCO)

| Model     | Input Size (pixels) | mAPval 50-95 | Accuracy Drop (%) | Warboy Speed, Fusion (ms) | Warboy Speed, Single PE (ms) |
| --------- | ------------------- | ------------ | ----------------- | ------------------------- | ---------------------------- |
| YOLOv7    | 640x640             |              |                   |                           |                              |
| YOLOv7x   | 640x640             |              |                   |                           |                              |
| YOLOv7-w6 | 1280x1280           |              |                   |                           |                              |
| YOLOv7-e6 | 1280x1280           |              |                   |                           |                              |
| YOLOv7-d6 | 1280x1280           |              |                   |                           |                              |
  • YOLOv5 Object Detection (COCO)

| Model    | Input Size (pixels) | mAPval 50-95 | Accuracy Drop (%) | Warboy Speed, Fusion (ms) | Warboy Speed, Single PE (ms) |
| -------- | ------------------- | ------------ | ----------------- | ------------------------- | ---------------------------- |
| YOLOv5n  | 640x640             |              |                   |                           |                              |
| YOLOv5s  | 640x640             |              |                   |                           |                              |
| YOLOv5m  | 640x640             |              |                   |                           |                              |
| YOLOv5l  | 640x640             |              |                   |                           |                              |
| YOLOv5x  | 640x640             |              |                   |                           |                              |
| YOLOv5n6 | 1280x1280           |              |                   |                           |                              |
| YOLOv5s6 | 1280x1280           |              |                   |                           |                              |
| YOLOv5m6 | 1280x1280           |              |                   |                           |                              |
| YOLOv5l6 | 1280x1280           |              |                   |                           |                              |

Pose Estimation

Pose estimation is a technique that estimates the posture of a person or object by detecting its body parts (typically joints) and inferring the pose from their positions.

Performance on Warboy

Instance Segmentation

Instance segmentation is a technique that identifies multiple objects in an image or video and delineates the boundary of each one. In essence, it combines Object Detection and Semantic Segmentation to identify each object individually, even among objects of the same class, and to estimate its boundary.

Performance on Warboy

Documentation

Please see below for installation instructions and a usage example.

Installation

To use this project, it's essential to install various software components provided by FuriosaAI. For detailed instructions on installing packages, drivers, and the Furiosa SDK, please see the following:

The Python SDK requires Python 3.8 or above. Install the required Python packages with pip:

pip install -r requirements.txt

Then install the packages needed to build the post-processing utilities with apt and run the build script:

sudo apt-get update
sudo apt-get install cmake libeigen3-dev
./build.sh

Usage Example

Set up config files for the project

First, download a YOLOv8 weight file for the example run:

cd warboy-vision-models
wget https://github.com/ultralytics/assets/releases/download/v8.1.0/yolov8n.pt

Before running the project, you need to set up two configuration files, one for the model and one for the demo. A short sketch of loading these files follows the two examples below.

  • Model config file : contains parameters for the model and its quantization.
application: object_detection            # vision task (object detection | pose estimation | instance segmentation)
model_name: yolov8n                      # model name
weight: yolov8n.pt                       # weight file path
onnx_path: yolov8n.onnx                  # onnx model path
onnx_i8_path: yolov8n_i8.onnx            # quantized onnx model path

calibration_params:
  calibration_method: SQNR_ASYM               # calibration method
  calibration_data: calibration_data          # calibration data path
  num_calibration_data: 10                    # number of calibration data

confidence_threshold: 0.25
iou_threshold: 0.7
input_shape: [640, 640]         # model input shape (Height, Width)
anchors:                        # anchor information
  - 
class_names:                    # class names
  - ...
  • Demo config file : contains device information and video paths for the project.
application: object_detection
model_config: ./cfg/object_detection_model.yaml
model_path: yolov8n_i8.onnx
output_path: output_detection
num_workers: 8
device: warboy(2)*1
video_path: 
  - [set your test video file path]
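As a quick illustration of how these files are consumed, the snippet below loads both YAML files with PyYAML and reads a few of the fields shown above. This is a hypothetical sketch, not the repository's own config loader.

import yaml

# Hypothetical helper: load a YAML config file into a plain dict.
def load_config(path: str) -> dict:
    with open(path, "r") as f:
        return yaml.safe_load(f)

model_cfg = load_config("cfg/object_detection_model.yaml")
demo_cfg = load_config("cfg/demo.yaml")

print(model_cfg["model_name"], model_cfg["input_shape"])        # e.g. yolov8n [640, 640]
print(model_cfg["calibration_params"]["calibration_method"])    # e.g. SQNR_ASYM
print(demo_cfg["device"], demo_cfg["num_workers"])              # e.g. warboy(2)*1 8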
Export ONNX

Next, export the model to the ONNX format. A hedged sketch of an equivalent export follows the command below.
  • command
    python tools/export_onnx.py cfg/object_detection_model.yaml
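As a rough illustration of what this step does, the snippet below exports a YOLOv8 checkpoint to ONNX with the ultralytics API. It is a hedged sketch of an equivalent export, not the contents of tools/export_onnx.py, which may apply additional graph modifications for Warboy.

# Hedged sketch: export a YOLOv8 checkpoint to ONNX with the ultralytics API.
# tools/export_onnx.py may post-process the exported graph differently.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                        # weight file from the model config
model.export(format="onnx", imgsz=640, opset=13)  # writes yolov8n.onnx next to the weight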
Quantizing an ONNX model using the Furiosa SDK

Once the model has been exported from its original format to ONNX, the next step is quantization. Since FuriosaAI's Warboy only supports models in 8-bit integer format (int8), the float32-based model must be quantized to an int8 model. A simplified sketch of this flow follows the command below.
  • command
    python tools/furiosa_quantizer.py cfg/object_detection_model.yaml
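For reference, the Furiosa SDK quantizer follows a calibrate-then-quantize flow roughly like the sketch below, which assumes the furiosa.quantizer API and an iterable of preprocessed calibration images; it is a simplified illustration, not the exact contents of tools/furiosa_quantizer.py.

# Simplified sketch of post-training quantization with furiosa.quantizer.
# Assumes calibration_inputs yields preprocessed float32 arrays shaped like the model input.
import onnx
from furiosa.quantizer import CalibrationMethod, Calibrator, quantize

model = onnx.load_model("yolov8n.onnx")
calibrator = Calibrator(model, CalibrationMethod.SQNR_ASYM)  # method from the model config

for inputs in calibration_inputs:        # hypothetical calibration data loader
    calibrator.collect_data([[inputs]])  # one list of input tensors per sample

ranges = calibrator.compute_range()
quantized = quantize(model, ranges)      # int8 model; the project saves it to onnx_i8_path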
Running the project using Furiosa Runtime

In the project, vision applications are executed on videos from multiple channels. To do this efficiently, optimizations such as Python parallel programming, asynchronous processing, and post-processing in C++ are included. For a detailed understanding of the project structure, please refer to the project structure diagram in the repository. A rough single-image inference sketch follows the commands below.
  • command

    python warboy_demo.py cfg/demo.yaml file    # save the result as image file
    python warboy_demo.py cfg/demo.yaml fastAPI # see the result on webpage using fastAPI (http://0.0.0.0:20001)
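For orientation, single-image inference on Warboy with the Furiosa Runtime looks roughly like the sketch below; the demo layers multi-channel video decoding, worker processes, and C++ post-processing on top of this pattern. The session API usage and the preprocessing shown here are assumptions based on the Furiosa SDK and a typical YOLO pipeline, not the demo's actual code.

# Rough sketch of single-image inference with the furiosa.runtime session API.
# Preprocessing (resize, normalization, layout) must match the exported model;
# the values below mirror a typical YOLO pipeline and are assumptions.
import cv2
import numpy as np
from furiosa.runtime import session

img = cv2.imread("sample.jpg")                      # hypothetical test image
img = cv2.resize(img, (640, 640))                   # input_shape from the model config
x = img[:, :, ::-1].transpose(2, 0, 1)[np.newaxis]  # BGR->RGB, HWC->NCHW
x = np.ascontiguousarray(x, dtype=np.float32) / 255.0

with session.create("yolov8n_i8.onnx") as sess:
    outputs = sess.run(x)                           # raw outputs; decoding and NMS happen in post-processing
    print([o.numpy().shape for o in outputs])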
