diff --git a/docs/doc/en/README.md b/docs/doc/en/README.md
index e69de29b..0ad7179d 100644
--- a/docs/doc/en/README.md
+++ b/docs/doc/en/README.md
@@ -0,0 +1,183 @@
+---
+title: MaixPy Quick Start
+---
+
+
+
+
+> For an introduction to MaixPy, please see the [MaixPy official website homepage](../../README.md)
+
+## Get a MaixCAM Device
+
+Purchase the MaixCAM development board from the [Sipeed Taobao](https://item.taobao.com/item.htm?id=784724795837) or [Sipeed AliExpress](https://www.aliexpress.com/store/911876460) store.
+
+**It is recommended to purchase the bundle with a `TF card`, `camera`, `2.3-inch touchscreen`, `case`, `Type-C data cable`, `Type-C one-to-two mini board`, and `4P serial port socket+cable`**, which will be convenient for later use and development. **The following tutorials assume that you already have these accessories** (including the screen).
+
+If you did not purchase a TF card, you will need to **prepare** a **TF card reader** to flash the system.
+
+>! Note that currently only the MaixCAM development board is supported. Other development boards with the same chip are not supported, including Sipeed's development boards with the same chip. Please be careful not to purchase the wrong board, which could result in unnecessary waste of time and money.
+
+## Getting Started
+
+### Prepare the TF Image Card and Insert it into the Device
+
+If the package you purchased includes a TF card, it already contains the factory image. If the TF card was not installed in the device at the factory, you will first need to carefully open the case (be careful not to tear the ribbon cables inside) and then insert the TF card. Additionally, since the firmware from the factory may be outdated, you can follow the instructions in [Upgrading and Flashing the System](./basic/os.md) to upgrade the system to the latest version.
+
+If you did not purchase a TF card, you need to flash the system onto a self-provided TF card. Please refer to [Upgrading and Flashing the System](./basic/os.md) for the flashing method, and then install it on the board.
+
+### Power On
+
+Use a `Type-C` data cable to connect the `MaixCAM` device and power it on. Wait for the device to boot up and enter the function selection interface.
+
+![maixcam_font](../../static/image/maixcam_font.png)
+
+If the screen does not display:
+* Please confirm that you purchased the bundled TF card. If you confirm that you have a TF card and it is inserted into the device, you can try [updating to the latest system](./basic/os.md).
+* If you did not purchase the TF card bundle, you need to follow the instructions in [Upgrading and Flashing the System](./basic/os.md) to flash the latest system onto the TF card.
+* Also, ensure that the screen and camera cables are not loose. The screen cable can easily come off when opening the case, so be careful.
+
+### Connect to the Network
+
+For the first run, you need to connect to the network, as you will need it later to install the runtime libraries and use the IDE.
+
+* On the device, click `Settings`, select `WiFi`, and click the `Scan` button to start scanning for nearby `WiFi`. You can click several times to refresh the list.
+* Find your WiFi hotspot. If you don't have a router, you can use your phone as a hotspot.
+* Enter the password and click the `Connect` button to connect.
+* Wait for the `IP` address to be obtained. This may take `10` to `30` seconds. If the interface does not refresh, you can exit the `WiFi` function and re-enter to check, or you can also see the `IP` information in `Settings` -> `Device Info`.
+
+### Update the Runtime Libraries
+
+**This step is very important!!!** If this step is not done properly, other applications and functions may not work (e.g., they may crash).
+
+* First, ensure that you have completed the previous step of connecting to WiFi and have obtained an IP address to access the internet.
+* On the device, click `Settings`, and select `Install Runtime Libraries`.
+* After the installation is complete, you will see that it has been updated to the latest version. Then exit.
+
+If it shows `Request failed` or `请求失败` (Request failed), please first check if the network is connected. You need to be able to connect to the internet. If it still doesn't work, please take a photo and contact customer service for assistance.
+
+### Use Built-in Applications
+
+Many applications are built-in, such as Find Blobs, AI Detector, Line Follower, etc. For example, Find Blobs:
+
+
+
+Please explore other applications on your own. More applications will be updated in the future. For usage documentation and application updates, please see the [MaixHub App Store](https://maixhub.com/app).
+
+**Note: The applications only include a part of the functionality that MaixPy can achieve. Using MaixPy, you can create even more features.**
+
+## Use as a Serial Module
+
+> If you want to use the device as the main controller (or if you don't understand what a serial module is), you can skip this step.
+
+The built-in applications can be used directly as serial modules, such as `Find Blobs`, `Find Faces`, `Find QR Codes`, etc.
+
+Usage:
+* Hardware connection: You can connect the device to the `Type-C one-to-two mini board`, which allows you to connect the device via serial to your main controller, such as `Arduino`, `Raspberry Pi`, `STM32`, etc.
+* Open the application you want to use, such as QR code recognition. When the device scans a QR code, it will send the result to your main controller via serial.
+> The serial baud rate is `115200`, the data format is `8N1`, and the protocol follows the [Maix Serial Communication Protocol Standard](https://github.com/sipeed/MaixCDK/blob/master/docs/doc/convention/protocol.md). You can find the corresponding application introduction on the [MaixHub APP](https://maixhub.com/app) to view the protocol.
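+
+For a quick sanity check on the main-controller side, you can simply dump the raw bytes before implementing the full protocol. Below is a minimal sketch using Python and `pyserial` on a Linux host such as a Raspberry Pi; the port name is an assumption, and real applications should parse frames according to the protocol document linked above.
+
+```python
+# Host-side sketch: read the raw bytes the MaixCAM application sends over serial.
+# The port name is an assumption, check which serial port your wiring actually uses.
+import serial  # pip install pyserial
+
+ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)  # 8N1 is pyserial's default frame format
+while True:
+    data = ser.read(64)  # read up to 64 bytes, returns b"" on timeout
+    if data:
+        print(data.hex())
+```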
+
+## Prepare to Connect the Computer and Device
+
+To allow the computer (PC) and the device (MaixCAM) to communicate later, we need to have them on the same local area network. Two methods are provided:
+* **Method 1 (strongly recommended)**: Wireless connection. The device uses WiFi to connect to the same router or WiFi hotspot as the computer. You can connect to your WiFi in the device's `Settings -> WiFi Settings`.
+* **Method 2**: Wired connection. The device connects to the computer via a USB cable, and the device will act as a virtual USB network card, allowing it to be on the same local area network as the computer via USB.
+
+> Method 2 may encounter some problems due to the need for USB and drivers, so it is recommended to start with WiFi instead. You can find common issues in the [FAQ](./faq.md).
+
+
+.. details::Method 2 has different setup methods on different computer systems, click to expand
+ * **Linux**: No additional setup is required. Just plug in the USB cable. Use `ifconfig` or `ip addr` to view the `usb0` network card. **Note** that the IP address you see here, e.g., `10.131.167.100`, is the computer's IP. The device's IP is the last octet changed to `1`, i.e., `10.131.167.1`.
+ * **Windows**: You can first confirm if a RNDIS device has been added in the `Network Adapters`. If so, you can use it directly. Otherwise, you need to manually install the RNDIS network card driver:
+ * Open the computer's `Device Manager`.
+ * Then find a RNDIS device with a question mark under `Other Devices`, right-click and select `Update Driver Software`.
+ * Select `Browse my computer for driver software`.
+ * Select `Let me pick from a list of available drivers on my computer`.
+ * Select `Network Adapters`, then click `Next`.
+ * On the left, select `Microsoft`, on the right, select `Remote NDIS Compatible Device`, then click `Next`, and select `Yes`.
+ * After installation, the effect is as follows:
+ ![RNDIS](../../static/image/rndis_windows.jpg)
+ * **MacOS**: No additional setup is required. Just plug in the USB cable. Use `ifconfig` or `ip addr` to view the `usb0` network card. **Note** that the IP address you see here, e.g., `10.131.167.100`, is the computer's IP. The device's IP is the last octet changed to `1`, i.e., `10.131.167.1`.
+
+ ## Prepare the Development Environment
+
+ * Download and install [MaixVision](https://wiki.sipeed.com/maixvision).
+ * Connect the device and computer with a Type-C cable, open MaixVision, and click the `"Connect"` button in the bottom left corner. It will automatically search for devices. After a short wait, you will see the device, and you can click the connect button next to it to connect to the device.
+
+ If **no device is detected**, you can also manually enter the device's IP address in the **device**'s `Settings -> Device Info`. You can also find solutions in the [FAQ](./faq.md).
+
+ **After a successful connection, the function selection interface on the device will disappear, and the screen will turn black, releasing all hardware resources. If there is still an image displayed, you can disconnect and reconnect.**
+
+ Here is a video example of using MaixVision:
+
+
+
+ ## Run Examples
+
+ Click `Example Code` on the left side of MaixVision, select an example, and click the `Run` button in the bottom left to send the code to the device for execution.
+
+ For example:
+ * `hello_maix.py`: Click the `Run` button, and you will see messages printed from the device in the MaixVision terminal, as well as an image in the upper right corner.
+ * `camera_display.py`: This example will open the camera and display the camera view on the screen.
+ ```python
+ from maix import camera, display, app
+
+ disp = display.Display() # Construct a display object and initialize the screen
+ cam = camera.Camera(640, 480) # Construct a camera object, manually set the resolution to 640x480, and initialize the camera
+ while not app.need_exit(): # Keep looping until the program exits (you can exit by pressing the function key on the device or clicking the stop button in MaixVision)
+     img = cam.read() # Read the camera view and save it to the variable img, you can print(img) to print the details of img
+     disp.show(img) # Display img on the screen
+ ```
+ * `yolov5.py` will detect objects in the camera view, draw bounding boxes around them, and display them on the screen. It supports detection of 80 object types. For more details, please see [YOLOv5 Object Detection](./vision/yolov5.md).
+
+ You can try other examples on your own.
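+
+ Below is a rough sketch of what `yolov5.py` does. This is only a sketch: the model path and the `nn.YOLOv5` API details are taken from the current MaixPy examples and may differ between versions; see [YOLOv5 Object Detection](./vision/yolov5.md) for the authoritative version.
+
+ ```python
+ from maix import camera, display, image, nn, app
+
+ detector = nn.YOLOv5(model="/root/models/yolov5s.mud")  # model path is an assumption, use the model shipped with your firmware
+ cam = camera.Camera(detector.input_width(), detector.input_height(), detector.input_format())
+ disp = display.Display()
+
+ while not app.need_exit():
+     img = cam.read()
+     objs = detector.detect(img, conf_th=0.5, iou_th=0.45)  # run detection on the current frame
+     for obj in objs:
+         img.draw_rect(obj.x, obj.y, obj.w, obj.h, color=image.COLOR_RED)
+         img.draw_string(obj.x, obj.y, f"{detector.labels[obj.class_id]}: {obj.score:.2f}", color=image.COLOR_RED)
+     disp.show(img)
+ ```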
+
+> If you encounter image display stuttering when using the camera examples, it may be due to poor network connectivity, or the quality of the USB cable or the host's USB being too poor. You can try changing the connection method or replacing the cable, host USB port, or computer.
+
+ ## Install Applications on the Device
+
+ The above examples run code on the device, but the code will stop running when `MaixVision` is disconnected. If you want the program to remain on the device and appear in its app menu (the function selection interface shown at boot), you can package it as an application and install it on the device.
+
+ Click the `Install App` button in the bottom left corner of `MaixVision`, fill in the application information, and the application will be installed on the device. Then you will be able to see the application on the device.
+ You can also choose to package the application and share your application to the [MaixHub App Store](https://maixhub.com/app).
+
+> The default examples do not explicitly write an exit function, so you can exit the application by pressing the function key on the device. (For MaixCAM, it is the user key.)
+
+ If you want the program to start automatically on boot, you can set it in `Settings -> Boot Startup`.
+
+ ## Next Steps
+
+ If you like what you've seen so far, **please be sure to give the MaixPy open-source project a star on [GitHub](https://github.com/sipeed/MaixPy) (you need to log in to GitHub first). Your star and recognition is the motivation for us to continue maintaining and adding new features!**
+
+ Up to this point, you've experienced the usage and development workflow. Next, you can learn about `MaixPy` syntax and related features. Please follow the left sidebar to learn. If you have any questions about using the API, you can look it up in the [API documentation](/api/).
+
+ It's best to learn with a specific purpose in mind, such as working on an interesting small project. This way, the learning effect will be better. You can share your projects and experiences on the [MaixHub Share Plaza](https://maixhub.com/share) and receive cash rewards!
+
+ ## Share and Discuss
+
+ * **[MaixHub Project and Experience Sharing](https://maixhub.com/share)**: Share your projects and experiences, and receive cash rewards. The basic requirements for receiving official rewards are:
+ * **Reproducible**: A relatively complete process for reproducing the project.
+ * **Showcase**: No detailed project reproduction process, but an attractive project demonstration.
+ * **Bug-solving experience**: Sharing the process and specific solution for resolving a particular issue.
+ * [MaixPy Official Forum](https://maixhub.com/discussion/maixpy) (for asking questions and discussion)
+ * Telegram: [MaixPy](https://t.me/maixpy)
+ * MaixPy Source Code Issues: [MaixPy issue](https://github.com/sipeed/MaixPy/issues)
+ * For business cooperation or bulk purchases, please contact support@sipeed.com.
diff --git a/docs/doc/en/audio/recognize.md b/docs/doc/en/audio/recognize.md
new file mode 100644
index 00000000..e69de29b
diff --git a/docs/doc/en/basic/app.md b/docs/doc/en/basic/app.md
new file mode 100644
index 00000000..cd9b9eb8
--- /dev/null
+++ b/docs/doc/en/basic/app.md
@@ -0,0 +1,38 @@
+---
+title: App development and app stores
+---
+
+## Introduction to Application Ecosystem
+
+In order to make the development board ready to use out of the box, make it easy for users to use without barriers, enable developers to share their interesting applications, and provide effective channels for receiving feedback and even profits, we have launched a simple application framework, including:
+
+- **[App Store](https://maixhub.com/app)**: Developers can upload and share applications, which users can download and use without needing to develop them. Developers can receive certain cash rewards (from MaixHub or user tips).
+- **Pre-installed Apps**: Officially provided, commonly used applications, such as color blob detection, AI object detection and tracking, QR code scanning, face recognition, etc., which users can use directly or as serial modules.
+- **MaixPy + MaixCDK Software Development Kit**: Using [MaixPy](https://github.com/sipeed/maixpy) or [MaixCDK](https://github.com/sipeed/MaixCDK), you can quickly develop embedded AI visual and audio applications in Python or C/C++, efficiently realizing your interesting ideas.
+- **MaixVision Desktop Development Tool**: A brand-new desktop code development tool for quick start, debugging, running, uploading code, installing applications to devices, one-click development, and even support for graphical block-based programming, making it easy for elementary school students to get started.
+
+Everyone is welcome to pay attention to the App Store and share their applications in the store to build a vibrant community together.
+
+
+## Packaging Applications
+
+Using MaixPy + MaixVision makes it easy to develop, package, and install applications:
+- Develop applications with MaixPy in MaixVision, which can be a single file or a project directory.
+- Connect the device.
+- Click the "Install" button at the bottom-left corner of MaixVision, fill in the basic information of the application in the popup window, where the ID is used to identify the application. A device cannot simultaneously install different applications with the same ID, so the ID should be different from the IDs of applications on MaixHub. The application name can be duplicated. You can also upload an icon.
+- Click "Package Application" to package the application into an installer. If you want to upload it to the [MaixHub App Store](https://maixhub./com/app), you can use this packaged file.
+- Click "Install Application" to install the packaged application on the device.
+- Disconnect from the device, and you will see your application in the device's app selection interface. Simply click on it to run the application.
+
+> If you develop with MaixCDK, you can use `maixcdk release` to package an application. Refer to the MaixCDK documentation for specifics.
+
+## Exiting Applications
+
+If you have developed a relatively simple application without a user interface and a back button, you can exit the application by pressing the device's function button (usually labeled as USER, FUNC, or OK) or the back button (if available, MaixCAM does not have this button by default).
+
+
+## Basic Guidelines for Application Development
+
+- Since touchscreens are standard, it is recommended to create a simple interface with touch interaction. You can refer to examples for implementation methods.
+- Avoid making interfaces and buttons too small: the MaixCAM's default screen is 2.3 inches with a 552x368 resolution and a high PPI, so make sure fingers can tap accurately without mis-touches.
+- Implement a simple serial interaction for the main functionality of each application based on the [serial protocol](https://github.com/sipeed/MaixCDK/blob/master/docs/doc/convention/protocol.md) (see [example](https://github.com/sipeed/MaixPy/tree/main/examples/communication/protocol)). This way, users can directly use it as a serial module. For instance, in a face detection application, you can output coordinates via serial port when a face is detected.
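+
+As a rough illustration (not the full Maix protocol; see the protocol example linked above for proper framing), reporting a detection result over the serial port could look like the sketch below. The device selection and the `maix.uart` API details are assumptions, so check the UART documentation for your board.
+
+```python
+from maix import uart
+
+# pick a serial device; using the first listed device is an assumption,
+# verify it against the UART documentation and your wiring
+devices = uart.list_devices()
+serial = uart.UART(devices[0], 115200)
+
+# ... inside your detection loop, after a face is found at (x, y, w, h):
+x, y, w, h = 10, 20, 100, 80
+serial.write_str("face:%d,%d,%d,%d\n" % (x, y, w, h))  # a simple text report, not the full Maix protocol
+```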
diff --git a/docs/doc/en/basic/app_usage.md b/docs/doc/en/basic/app_usage.md
new file mode 100644
index 00000000..d1caee2e
--- /dev/null
+++ b/docs/doc/en/basic/app_usage.md
@@ -0,0 +1,12 @@
+---
+title: Application User Guide
+---
+
+After powering on, the device automatically enters the application selection interface, which contains various built-in applications. You can find descriptions and usage instructions for each application in the [MaixHub App Store](https://maixhub.com/app).
+
+The commonly used settings are `Settings -> Language`, as well as `Settings -> WiFi`. The `App Store` application can be used for upgrading and installing applications. Once connected to a WiFi network that has internet access, you can scan and install applications from the [MaixHub App Store](https://maixhub.com/app).
+
+Moreover, applications you develop can also be uploaded to the [MaixHub App Store](https://maixhub.com/app) to share with others. High-quality applications will receive official cash rewards, and excellent applications will gain recognition and support from the community.
+
+Whether it's a simple application for collecting sensor data or a complex function application, let's work together to create more interesting things!
+
diff --git a/docs/doc/en/basic/linux_basic.md b/docs/doc/en/basic/linux_basic.md
new file mode 100644
index 00000000..79bbfc6a
--- /dev/null
+++ b/docs/doc/en/basic/linux_basic.md
@@ -0,0 +1,62 @@
+---
+title: Basic Knowledge of Linux
+---
+
+## Introduction
+
+For beginners just starting out, you can skip this chapter for now and come back to it after mastering the basics of MaixPy development.
+
+The latest MaixPy runs on top of a Linux system on the MaixCAM hardware, so low-level MaixPy development is ultimately based on Linux. Sipeed has done a lot of work in MaixPy so that developers can enjoy using it without any knowledge of Linux, but there are situations where some low-level operations are needed, so this section covers some basic Linux knowledge for developers who are unfamiliar with it.
+
+## Why Linux System is Needed
+
+Specific reasons can be researched individually. Here are a few examples in simplified terms that may not sound too technical but are easy for beginners to understand:
+* In microcontrollers, our program is usually a loop, but with Linux, we can run multiple programs simultaneously, each appearing to run independently, where the actual execution is handled by the operating system.
+* With a large community of Linux-based developers, required functionalities and drivers can be easily found without the need to implement them from scratch.
+* Linux offers a rich set of accompanying software tools for convenient development and debugging. Some Linux common tools not mentioned in this tutorial can theoretically be used as well.
+
+## File System
+
+What is a file system?
+* Similar to a computer's file system, Linux manages hardware disks using a file system, making it easy for us to read and write data to the disk.
+* For readers who have worked with microcontrollers but are not familiar with file systems: imagine a Flash chip or TF card whose data persists after power loss and can be read and written through APIs. However, Flash has limited write endurance, so a program is needed to manage wear and the available space. A file system is such a mature, proven program that manages the Flash space and read/write operations; by calling its APIs we greatly reduce development work while gaining stability and safety.
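+
+As a simple illustration, on a system with a file system you just call the standard file APIs and the data persists across power cycles (the path below is only an example):
+
+```python
+# write a value to a file; the file system decides where on the TF card the bytes go
+with open("/root/config.txt", "w") as f:
+    f.write("threshold=128\n")
+
+# after a reboot, the data is still there and can be read back
+with open("/root/config.txt", "r") as f:
+    print(f.read())
+```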
+
+## Transferring Files between Computer and Device (Development Board)
+
+Since the device has Linux and a file system, how do we send files to it?
+
+For MaixPy, MaixVision will provide built-in file management in a future version. Until then, you can use the following methods:
+
+Here we mainly discuss transferring files through the network. Other methods can be explored on your own by searching for "transferring files to Linux":
+* Ensure the device and computer are connected to the same local network, for example:
+ * When the MaixCAM's USB port is connected to the computer, a virtual network card is created which can be seen in the device manager on the computer, and the device's IP can be found in the device's `Settings -> Device Information`.
+ * Alternatively, connect to the same local network on the device through `Settings -> WiFi`.
+* Use SCP or SFTP protocols on the computer to transfer files to the device. There are many specific software options and methods, such as:
+ * On Windows, you can use WinSCP, FileZilla, or the scp command.
+ * On Linux, use FileZilla or the scp command.
+ * On Mac, use FileZilla or the scp command.
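+
+For example, a typical `scp` command looks like this (the IP address is a placeholder; use the one shown in the device's `Settings -> Device Information`, and both the username and password are `root`):
+
+```shell
+# copy a local script to the device's /root directory
+scp ./main.py root@192.168.0.123:/root/
+```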
+
+## Terminal and Command Line
+
+The terminal is a tool for communicating with and operating the Linux system, similar to Windows' `cmd` or `PowerShell`.
+
+For example, we can enter `ssh root@maixcam-xxxx.local` in PowerShell on Windows or in a terminal on Linux to connect to the device (both the username and the password are `root`). You can find the device's specific hostname in `Settings -> Device Information`.
+
+Then, we can operate the device by entering commands. For instance, the `ls` command lists the files in the device's current directory, while `cd` switches to a different directory (similar to clicking folders in the file manager on a computer):
+
+```shell
+cd / # Switch to the root directory
+ls # Display all files in the current directory (root directory)
+```
+
+This will display similar content as below:
+
+```shell
+bin lib media root tmp
+boot lib64 mnt run usr
+dev linuxrc opt sbin var
+etc lost+found proc sys
+```
+
+For more command learning, please search for `Linux command line usage tutorials` on your own. This is just to introduce beginners to basic concepts so that when developers mention them, they can understand what they mean.
+
diff --git a/docs/doc/en/basic/maixpy_upgrade.md b/docs/doc/en/basic/maixpy_upgrade.md
new file mode 100644
index 00000000..c7ba3598
--- /dev/null
+++ b/docs/doc/en/basic/maixpy_upgrade.md
@@ -0,0 +1,23 @@
+---
+title: Update MaixPy.
+---
+
+There are two ways to update. If you are new to this and want to keep things simple, you can start with the MaixPy firmware pre-installed on the TF card that comes with the device and update it later.
+
+However, since there is no telling when the TF card you received was flashed at the factory, it is recommended to update the system.
+
+## Updating the System Directly
+
+Follow the steps in [Upgrading and Flashing the System](./os.md) to upgrade to the latest system, which already includes the newest MaixPy firmware.
+
+## Updating Only the MaixPy Firmware
+
+Check the latest version information and release notes in the [MaixPy repository release page](https://github.com/sipeed/MaixPy/releases). It includes details about the MaixPy firmware and the system information corresponding to each version.
+
+If you prefer not to update the system (since system changes are usually minimal, you can check if there are any system-related changes in the MaixPy update notes before deciding whether to update the system), you can simply update the MaixPy firmware.
+
+* Set up WiFi in the settings to connect the system to the internet.
+* Click on `Update MaixPy` in the settings app to proceed with the update.
+
+> If you are comfortable using the terminal, you can also update MaixPy by using `pip install MaixPy -U` in the terminal.
+
diff --git a/docs/doc/en/basic/maixvision.md b/docs/doc/en/basic/maixvision.md
new file mode 100644
index 00000000..13fc5d6b
--- /dev/null
+++ b/docs/doc/en/basic/maixvision.md
@@ -0,0 +1,84 @@
+---
+title: MaixVision - MaixPy Programming + Graphical Block Programming
+---
+
+## Introduction
+
+MaixVision is a developer programming tool specifically designed for the Maix ecosystem, supporting MaixPy programming and graphical block programming. It also supports online running, debugging, and real-time image preview, allowing the synchronization of the device display screen for easy debugging and development.
+
+It also supports packaging applications and installing them on devices, making it convenient for users to generate and install applications with a single click.
+
+Additionally, it integrates some handy development tools, such as file management, threshold editors, QR code generators, and more.
+
+## Using MaixPy Programming and Online Running
+
+By following the steps in the [Quick Start](../README.md), we can easily use MaixPy programming and run programs online.
+
+## Real-time Image Preview
+
+MaixPy provides a `display` module, which can display images on the screen. When calling the `show` method of the `display` module, the image will be sent to MaixVision for display in real-time, for example:
+
+```python
+from maix import display, camera
+
+cam = camera.Camera(640, 480)
+disp = display.Display()
+while 1:
+    disp.show(cam.read())
+```
+
+Here, we capture an image using the camera, and then display it on the screen using `disp.show()`, which will also transmit the image to MaixVision for display.
+
+Clicking the `Pause` button in the top-right corner stops sending images to the MaixVision display.
+
+## Computing the Histogram of an Image
+
+In the previous step, we could see the image in real-time on MaixVision. By selecting a region with the mouse, we can view the histogram of that area below the image. Choosing different color representation methods allows us to see histograms of different color channels. This feature helps us find suitable parameters when working on image processing algorithms.
+
+## Using Graphical Block Programming
+
+Currently in development, stay tuned for updates.
+
+## Distinguishing Between `Device File System` and `Computer File System`
+
+An important concept to grasp here is distinguishing between the **`Computer File System`** and **`Device File System`**:
+
+- **Computer File System**: This operates on the computer. Opening files or projects in MaixVision accesses files stored on the computer. Any changes are automatically saved to the computer's file system.
+- **Device File System**: When a program runs, it sends files to the device for execution. Therefore, files accessed within the code are read from the device's file system.
+
+A common issue arises when a file is saved on the computer at `D:\data\a.jpg`, and then the file is referenced on the device like `img = image.load("D:\data\a.jpg")`. This file cannot be found on the device because there is no `D:\data\a.jpg` file stored there.
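+
+For example, after transferring the file to the device (see the next section), load it with a path that actually exists on the device. A minimal sketch, assuming the file was uploaded to `/root/a.jpg`:
+
+```python
+from maix import image
+
+# the path refers to the device's own file system, not the computer's
+img = image.load("/root/a.jpg")
+print(img)
+```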
+
+For specific instructions on transferring computer files to the device, please refer to the following section.
+
+## Transferring Files to the Device
+
+Currently in development. In the meantime, you can use alternative tools:
+
+Begin by knowing the device's IP address or device name, which MaixVision can search for, or check in the device's `Settings -> System Information`, where you might find something similar to `maixcam-xxxx.local` or `192.168.0.123`. The username and password are both `root`, and the file transfer protocol is `SFTP` with port number `22`.
+
+There are various user-friendly software options available for different operating systems:
+
+### For Windows
+
+Use tools like [WinSCP](https://winscp.net/eng/index.php) or [FileZilla](https://filezilla-project.org/) to connect to the device via `SFTP`. Provide the necessary device and account information to establish the connection.
+
+For further guidance, perform a quick online search.
+
+### For Linux
+
+Use the `scp` command in the terminal to transfer files to the device, for example:
+
+```bash
+scp /path/to/your/file.py root@maixcam-xxxx.local:/root
+```
+
+### For Mac
+
+- **Method 1**: Use the `scp` command in the terminal to transfer files to the device, for example:
+
+```bash
+scp /path/to/your/file.py root@maixcam-xxxx.local:/root
+```
+
+* **Method 2**: Use tools like [FileZilla](https://filezilla-project.org/) to connect to the device, transfer the files to the device, choose the `SFTP` protocol, fill in the device and account information, and connect.
+
diff --git a/docs/doc/en/basic/os.md b/docs/doc/en/basic/os.md
new file mode 100644
index 00000000..d8cde7c7
--- /dev/null
+++ b/docs/doc/en/basic/os.md
@@ -0,0 +1,28 @@
+---
+title: Upgrade and burn system.
+---
+
+## Introduction
+
+If you have purchased the official (Sipeed) package with a TF card, typically the system has already been pre-programmed at the factory and can be used directly without further steps.
+
+However, to avoid using an outdated version of the pre-programmed system, it is highly recommended to first upgrade to the latest system following the tutorial.
+
+## How to Confirm if System Upgrade is Needed
+
+* Upon booting up to the main menu, click on `Settings`, then `Device Info` to check the system's version number.
+* Visit the [MaixPy Release History page](https://github.com/sipeed/MaixPy/releases) to review the update logs, which contain information on MaixPy firmware and system image updates. If there are significant updates after your current version, it is advisable to upgrade.
+
+ > If the latest system update only includes routine MaixPy firmware updates compared to your current system, you may choose not to upgrade. You can simply update `MaixPy` separately in `Settings` under `Update MaixPy`.
+
+## Obtaining the Latest System
+
+Visit the [MaixPy Release page](https://github.com/sipeed/MaixPy/releases) to find the latest system image file, such as `maixcam_os_20240401_maixpy_v4.1.0.xz`.
+
+Alternate link:
+* [Sourceforge](https://sourceforge.net/projects/maixpy/files/)
+
+## Burning the System Image to MaixCAM
+
+Refer to the [MaixCAM System Flashing Guide](https://wiki.sipeed.com/hardware/zh/maixcam/os.html).
+
diff --git a/docs/doc/en/basic/python.md b/docs/doc/en/basic/python.md
new file mode 100644
index 00000000..40845099
--- /dev/null
+++ b/docs/doc/en/basic/python.md
@@ -0,0 +1,58 @@
+---
+title: Basic Knowledge of Python
+---
+
+The MaixPy tutorial documentation does not include a full Python syntax tutorial, because there are already many excellent Python tutorials available. Here, we only outline what you need to learn and point you toward suitable learning directions and paths.
+
+## Introduction to Python
+
+Python is an interpreted, object-oriented, dynamically typed high-level programming language.
+* Interpreted: it runs directly without a compilation step. The advantage is rapid development; a minor drawback is slower execution because code is interpreted on each run, though most often the bottleneck lies in the developer's code rather than in the language itself.
+* Object-oriented: it supports object-oriented programming, allowing the definition of classes and objects. Compared to procedural languages, it makes code easier to organize. For more details, please search on your own.
+* Dynamically typed: variables do not need type declarations; they can be assigned directly, and the type is determined automatically from the assigned value. This reduces code volume, but can also lead to type errors, so it requires the developer's attention.
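+
+A quick illustration of the dynamic typing point above:
+
+```python
+x = 123          # x currently holds an int
+x = "hello"      # re-assigning changes the type to str, no declaration needed
+print(type(x))   # <class 'str'>
+```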
+
+In conclusion, Python is very easy to get started with, even for developers unfamiliar with it: it offers plenty of ready-to-use libraries, a large developer community, and short application development cycles, making it highly worthwhile to learn!
+
+## Python Environment Setup
+
+You can install Python on your computer by following whichever Python tutorial you are learning from.
+Alternatively, you can connect to a device with MaixVision and run your programs directly on the development board.
+
+## What Python Basics are Needed to Use MaixPy?
+
+* Basic concepts of Python.
+* Basic concepts of object-oriented programming.
+* Basic syntax of Python, including:
+ * Tab indentation alignment syntax.
+ * Variables, functions, classes, objects, comments, etc.
+ * Control statements such as if, for, while, etc.
+ * Modules and importing modules.
+ * Basic data types such as int, float, str, list, dict, tuple, etc.
+ * Difference between bytes and str, and conversion.
+ * Exception handling, try-except.
+ * Common built-in functions like print, open, len, range, etc.
+ * Common built-in modules like os, sys, time, random, math, etc.
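+
+As a small taste of two of the points above (bytes/str conversion and exception handling):
+
+```python
+s = "temp: 25.5"
+b = s.encode("utf-8")       # str -> bytes
+print(b)                    # b'temp: 25.5'
+print(b.decode("utf-8"))    # bytes -> str
+
+try:
+    value = int("not a number")
+except ValueError as e:
+    print("conversion failed:", e)
+```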
+
+Mastering the above foundational knowledge will enable you to smoothly program with MaixPy. With the help of subsequent tutorials and examples, if unsure, you can refer to search engines, official documentation, or ask ChatGPT to successfully complete your development tasks.
+
+## For Developers Experienced in Another Object-Oriented Programming Language
+
+If you are already proficient in an object-oriented language like C++/Java/C#, you simply need to quickly review Python syntax before starting to use it.
+
+You can refer to resources like [Runoob Tutorial](https://www.runoob.com/python3/python3-tutorial.html) or the [Python Official Tutorial](https://docs.python.org/3/tutorial/index.html).
+
+Alternatively, you can explore individual developers' blogs, such as [Wow! It's Python](https://neucrack.com/p/59).
+
+## For Developers with C Language Experience but No Object-Oriented Programming Experience
+
+If you only know C and lack understanding of object-oriented concepts, you can start by learning about object-oriented programming concepts before diving into Python. It's relatively quick and you can search for video tutorials for entry-level guidance.
+
+After following introductory video tutorials, you can then refer to documentation tutorials such as [Runoob Tutorial](https://www.runoob.com/python3/python3-tutorial.html) or the [Python Official Tutorial](https://docs.python.org/3/tutorial/index.html) to get started!
+
+Once you have acquired the basic knowledge, you can start using MaixPy for programming based on the documentation and examples.
+
+## For Programming Beginners
+
+If you have never dealt with programming before, you will need to start learning Python from scratch. Python is also quite suitable as an introductory language. You can search for video tutorials for specific guidance.
+
+After mastering the basic syntax, you will be able to use MaixPy for programming by following examples provided.
diff --git a/docs/doc/en/basic/python_pkgs.md b/docs/doc/en/basic/python_pkgs.md
new file mode 100644
index 00000000..b94361ff
--- /dev/null
+++ b/docs/doc/en/basic/python_pkgs.md
@@ -0,0 +1,31 @@
+---
+title: Add extra Python packages.
+---
+
+## Introduction
+
+MaixPy is based on the Python language and provides a wide range of functionalities and APIs for embedded application development. In addition to this, you can also use other Python packages to extend its functionality.
+
+## Installing Additional Python Packages
+
+> Please note that not all Python packages are supported. Generally, only pure Python packages are supported, not C extension packages. C extension packages may require you to manually cross-compile them on a computer (which is quite complex and won't be covered here).
+
+### Method 1: Installing Using Python Code
+
+You can install the package you need in MaixVision using Python code, for example:
+
+```python
+import os
+os.system("pip install package_name")
+```
+
+To update a package, you can use:
+
+```python
+import os
+os.system("pip install --upgrade package_name")
+```
+
+### Method 2: Installing Using the Terminal and pip Command
+
+Follow the terminal usage method introduced in [Linux Basics](./linux_basic.md) and use `pip install package_name` to install the package you need.
diff --git a/docs/doc/en/faq.md b/docs/doc/en/faq.md
new file mode 100644
index 00000000..a05632f3
--- /dev/null
+++ b/docs/doc/en/faq.md
@@ -0,0 +1,47 @@
+---
+title: MaixPy FAQ (Frequently Asked Questions)
+---
+
+This page lists common questions and solutions related to MaixPy. If you encounter any issues, please search for answers here first.
+If you cannot find an answer on this page, you can post your question with detailed steps on the [MaixHub Discussion Forum](https://maixhub.com/discussion).
+
+## MaixVision cannot find the device?
+
+First, confirm whether the connection method is WiFi or USB cable.
+**WiFi**:
+* Ensure that WiFi is correctly connected and has obtained an IP address. You can view the `ip` in `Settings -> Device Info` or `Settings -> WiFi`.
+
+**USB Cable**:
+* Ensure that the device is connected to the computer via a Type-C data cable, and the device is powered on and has entered the function selection interface.
+* Ensure that the device driver is installed:
+ * On Windows, check if there is a USB virtual network adapter device in `Device Manager`. If there is an exclamation mark, it means the driver is not installed properly. Follow the instructions in [Quick Start](./README.md) to install the driver.
+ * On Linux, you can check if there is a `usb0` device by running `ifconfig` or `ip addr`, or check all USB devices with `lsusb`. Linux already includes the driver, so if the device is not recognized, check the hardware connection, ensure the device system is up-to-date, and ensure the device has booted up properly.
+ * On macOS, follow the same steps as Linux.
+* Additionally, check the quality of the USB cable and try using a high-quality cable.
+* Additionally, check the quality of the computer's USB port. For example, some small form factor PCs have poor EMI design on their USB ports, and connecting a good quality USB hub may allow the device to work. You can also try a different USB port or a different computer.
+
+## MaixVision camera example shows choppy video
+
+The default GC4653 camera has a maximum frame rate of 30 frames per second (FPS). Under normal circumstances, the MaixVision display should not appear choppy to the naked eye. If choppiness occurs, first consider transmission issues:
+* Check the network connection quality, such as WiFi.
+* If using a USB connection, check the USB cable quality, computer USB port quality, and try using a different computer, USB port, or USB cable for comparison.
+
+## What is the difference between MaixPy v4 and v1/v3?
+
+* MaixPy v4 uses the Python language and is the culmination of the experiences from v1 and v3, offering better supporting software and ecosystem, more features, simpler usage, and more comprehensive documentation. While the hardware has significant improvements, the pricing is even more affordable compared to the other two versions. Additionally, it provides compatibility with the K210 user experience and API, making it easier for users to migrate quickly from v1 to v4.
+* v1 used the Micropython language and had many limitations, such as limited third-party library support. Additionally, due to the hardware performance limitations of the Maix-I (K210), there was not enough memory, limited AI model support, and lack of hardware acceleration for many codecs.
+* v3 also used the Python language and was based on the Maix-II-Dock (v831) hardware. However, the hardware had limited AI model support, and the Allwinner ecosystem was not open enough, with an incomplete API. This version was only intended for use with the Maix-II-Dock (v831) and will not receive further updates.
+
+## Does MaixPy currently only support MaixCAM, or can it work with other boards using the same chipset?
+
+MaixPy currently only supports the MaixCAM series of boards. Other boards using the same chipset, including Sipeed's boards like the LicheeRV-Nano, are not supported. It is strongly recommended not to attempt using MaixPy with other boards, as it may result in device damage (such as smoke or screen burn), for which you will be solely responsible.
+
+In the future, Sipeed's Maix series of products will continue to be supported by MaixPy. If you have any needs that cannot be met by MaixCAM, you can post your requirements on the [MaixHub Discussion Forum](https://maixhub.com/discussion) or send an email to support@sipeed.com.
+
+## Can I use a camera or screen other than the officially bundled ones?
+
+It is not recommended to use cameras or screens other than the officially bundled ones, unless you have sufficient software and hardware knowledge and experience. Otherwise, it may result in device damage.
+
+The officially bundled accessories have been fine-tuned for both software and hardware, ensuring the best performance and allowing for out-of-the-box usage. Other accessories may have different interfaces, drivers, and software, requiring you to calibrate them yourself, which is an extremely complex process.
+
+However, if you are an expert, we welcome you to submit a pull request!
diff --git a/docs/doc/en/modules/acc.md b/docs/doc/en/modules/acc.md
new file mode 100644
index 00000000..e69de29b
diff --git a/docs/doc/en/modules/thermal_cam.md b/docs/doc/en/modules/thermal_cam.md
new file mode 100644
index 00000000..e69de29b
diff --git a/docs/doc/en/modules/tof.md b/docs/doc/en/modules/tof.md
new file mode 100644
index 00000000..e69de29b
diff --git a/docs/doc/en/peripheral/gpio.md b/docs/doc/en/peripheral/gpio.md
new file mode 100644
index 00000000..e69de29b
diff --git a/docs/doc/en/peripheral/i2c.md b/docs/doc/en/peripheral/i2c.md
new file mode 100644
index 00000000..e69de29b
diff --git a/docs/doc/en/peripheral/pwm.md b/docs/doc/en/peripheral/pwm.md
new file mode 100644
index 00000000..e69de29b
diff --git a/docs/doc/en/peripheral/spi.md b/docs/doc/en/peripheral/spi.md
new file mode 100644
index 00000000..e69de29b
diff --git a/docs/doc/en/peripheral/uart.md b/docs/doc/en/peripheral/uart.md
new file mode 100644
index 00000000..e69de29b
diff --git a/docs/doc/en/peripheral/wdt.md b/docs/doc/en/peripheral/wdt.md
new file mode 100644
index 00000000..e69de29b
diff --git a/docs/doc/en/pro/compile_os.md b/docs/doc/en/pro/compile_os.md
new file mode 100644
index 00000000..e69de29b
diff --git a/docs/doc/en/sidebar.yaml b/docs/doc/en/sidebar.yaml
index 79dbab37..dabc8295 100644
--- a/docs/doc/en/sidebar.yaml
+++ b/docs/doc/en/sidebar.yaml
@@ -1,83 +1,127 @@
items:
- file: README.md
- label: Brief
-- collapsed: false
- items:
- - collapsed: false
- file: maix/err.md
- label: err
- - collapsed: false
- file: maix/tensor.md
- label: tensor
- - collapsed: false
- file: maix/image.md
- label: image
- - collapsed: false
- file: maix/camera.md
- label: camera
- - collapsed: false
- file: maix/display.md
- label: display
- - collapsed: false
- file: maix/comm.md
- label: comm
- - collapsed: false
- file: maix/thread.md
- label: thread
- - collapsed: false
- file: maix/fs.md
- label: fs
- - collapsed: false
- file: maix/time.md
- label: time
- - collapsed: false
- file: maix/i18n.md
- label: i18n
- - collapsed: false
- file: maix/protocol.md
- label: protocol
- - collapsed: false
- file: maix/example.md
- label: example
- - collapsed: false
- file: maix/app.md
- label: app
- - collapsed: false
- file: maix/nn.md
- items:
- - collapsed: false
- file: maix/nn/F.md
- label: F
- label: nn
- - collapsed: false
- file: maix/peripheral.md
- items:
- - collapsed: false
- file: maix/peripheral/timer.md
- label: timer
- - collapsed: false
- file: maix/peripheral/wdt.md
- label: wdt
- - collapsed: false
- file: maix/peripheral/pwm.md
- label: pwm
- - collapsed: false
- file: maix/peripheral/gpio.md
- label: gpio
- - collapsed: false
- file: maix/peripheral/spi.md
- label: spi
- - collapsed: false
- file: maix/peripheral/uart.md
- label: uart
- - collapsed: false
- file: maix/peripheral/i2c.md
- label: i2c
- - collapsed: false
- file: maix/peripheral/adc.md
- label: adc
- label: peripheral
- - collapsed: false
- file: maix/touchscreen.md
- label: touchscreen
- label: maix
+ label: Quick Start
+- file: faq.md
+ label: FAQ
+
+- label: Base
+- file: basic/os.md
+ label: Burning system
+- file: basic/app_usage.md
+  label: App usage
+- file: basic/maixpy_upgrade.md
+  label: Update MaixPy
+- file: basic/maixvision.md
+  label: MaixVision usage
+- file: basic/python.md
+ label: Python syntax
+- file: basic/linux_basic.md
+ label: Linux fundamentals
+- file: basic/python_pkgs.md
+  label: Add Python packages
+- file: basic/app.md
+  label: App development
+
+- label: Basic images and algorithms
+- file: vision/display.md
+  label: Screen usage
+- file: vision/camera.md
+  label: Camera usage
+- file: vision/image_ops.md
+  label: Image operations
+- file: vision/find_blobs.md
+  label: Find color blobs
+- file: vision/qrcode.md
+  label: QR code recognition
+- file: vision/apriltag.md
+  label: AprilTag recognition
+
+- label: AI Vision
+- file: vision/ai.md
+ label: AI vision knowledge
+- file: vision/classify.md
+  label: AI object classification
+- file: vision/yolov5.md
+  label: YOLOv5 object detection
+- file: vision/face_recognition.md
+  label: Face recognition
+- file: vision/body_key_points.md
+  label: Human keypoint detection
+- file: vision/self_learn_classifier.md
+ label: Self-learning classifier
+- file: vision/self_learn_detector.md
+ label: Self-learning detector
+- file: vision/object_track.md
+ label: Object tracking and counting
+- file: vision/ocr.md
+ label: OCR
+- file: vision/maixhub_train.md
+ label: MaixHub online AI training
+- file: vision/custmize_model.md
+ label: Custom model
+
+
+- label: AI audio
+- file: audio/record.md
+ label: Audio record
+- file: audio/play.md
+ label: Play audio
+- file: audio/classifier.md
+ label: AI voice classifier
+- file: audio/keyword.md
+  label: Keyword recognition
+- file: audio/recognize.md
+  label: Real-time speech recognition
+- file: audio/synthesis.md
+ label: Speech synthesis
+
+- label: Video
+- file: video/record.md
+ label: Video record
+- file: video/play.md
+ label: Play video
+- file: video/jpeg_streaming.md
+ label: JPEG stream
+- file: video/rtsp.md
+ label: RTSP stream
+
+
+- label: On-chip peripherals
+- file: peripheral/gpio.md
+ label: GPIO
+- file: peripheral/uart.md
+ label: UART
+- file: peripheral/i2c.md
+ label: I2C
+- file: peripheral/pwm.md
+ label: PWM
+- file: peripheral/spi.md
+ label: SPI
+- file: peripheral/wdt.md
+ label: WDT watchdog
+
+- label: Off-chip modules
+- file: modules/acc.md
+ label: Accelerometer
+- file: modules/temp_hum.md
+ label: Temperature and humidity
+- file: modules/tof.md
+ label: TOF
+- file: modules/thermal_cam.md
+ label: Thermal imaging
+
+- label: Advanced
+- file: source_code/contribute.md
+ label: Contribute
+- file: source_code/build.md
+ label: Build source code
+- file: source_code/faq.md
+ label: MaixPy Source FAQ
+- file: source_code/add_c_module.md
+ label: Write in C/C++
+- file: source_code/maixcdk.md
+ label: MaixCDK develop
+- file: pro/compile_os.md
+ label: Build firmware
+
+
diff --git a/docs/doc/en/source_code/add_c_module.md b/docs/doc/en/source_code/add_c_module.md
new file mode 100644
index 00000000..31495320
--- /dev/null
+++ b/docs/doc/en/source_code/add_c_module.md
@@ -0,0 +1,16 @@
+---
+title: Adding a C/C++ Module to MaixPy
+---
+
+## Introduction
+
+Sometimes we need to execute a function efficiently, and the speed of Python cannot meet the requirements. In such cases, we can use C/C++ or other compiled languages to implement the function.
+
+## General Function Encapsulation
+
+If the function you want to encapsulate does not depend on other functionalities of MaixPy, you can directly use the general methods for adding C/C++ modules to Python, such as `cffi`, `ctypes`, etc. You can search for the relevant methods online.
+> Welcome to contribute methods via PR
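+
+For example, a minimal `ctypes` sketch might look like the following; the shared library name and the `add` function are hypothetical, and you would compile them yourself (e.g. `gcc -shared -fPIC -o libfast.so fast.c`):
+
+```python
+import ctypes
+
+# load the shared library you compiled (hypothetical name)
+lib = ctypes.CDLL("./libfast.so")
+# declare the C function's signature: int add(int, int)
+lib.add.argtypes = [ctypes.c_int, ctypes.c_int]
+lib.add.restype = ctypes.c_int
+
+print(lib.add(1, 2))  # the heavy lifting runs in C instead of Python
+```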
+
+## If Your Module Needs to Depend on Other Basic APIs of MaixPy
+
+You need to learn how to compile and use [MaixCDK](https://github.com/sipeed/MaixCDK) first, because MaixPy is generated from MaixCDK APIs. Some functionalities in MaixPy are also available in MaixCDK, and then... TODO
diff --git a/docs/doc/en/source_code/build.md b/docs/doc/en/source_code/build.md
new file mode 100644
index 00000000..4cb23aa9
--- /dev/null
+++ b/docs/doc/en/source_code/build.md
@@ -0,0 +1,71 @@
+---
+title: MaixPy develop source code guide
+---
+
+## Get source code
+
+```shell
+git clone https://github.com/sipeed/MaixPy
+cd MaixPy
+```
+
+## Build and pack to wheel
+
+
+```shell
+python setup.py bdist_wheel maixcam
+```
+
+`maixcam` can be replaced with another board config; see the `platform_names` variable in [setup.py](https://github.com/sipeed/MaixPy/blob/main/setup.py).
+
+
+After the build succeeds, you will find the wheel file in the `dist` directory; use `pip install -U MaixPy****.whl` on your device to install or upgrade it.
+
+> `python setup.py bdist_wheel maixcam --skip-build` skips the build step and only packs the wheel, so you can run `maixcdk menuconfig` and `maixcdk build` first to customize the build.
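+
+For example, a customized build-and-pack flow could look like:
+
+```shell
+maixcdk menuconfig                                  # optionally adjust build options
+maixcdk build                                       # build with the customized config
+python setup.py bdist_wheel maixcam --skip-build    # pack the existing build into a wheel
+```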
+
+## Build manually
+
+```shell
+maixcdk build
+```
+
+## Run test after modify source code
+
+* First, build source code by
+```shell
+maixcdk build
+```
+
+* If building for the PC itself (platform `linux`):
+Then execute `./run.sh your_test_file_name.py` to run a Python script.
+```shell
+cd test
+./run.sh examples/hello_maix.py
+```
+
+* If cross-compiling for a board:
+ * The fastest way is to copy the `maix` directory to the device's `/usr/lib/python3.11/site-packages/` directory, then run the script on the device.
+ * Or pack the wheel and install it on the device with `pip install -U MaixPy****.whl`, then run the script on the device.
+
+## Preview documentation locally
+
+Documentation is in the [docs](https://github.com/sipeed/MaixPy/tree/main/docs) directory in `Markdown` format; you can use [teedoc](https://github.com/teedoc/teedoc) to generate the web version of the documentation.
+
+The API documentation is generated when building the MaixPy firmware; **if you don't build MaixPy, the API documentation will be empty**.
+
+```shell
+pip install teedoc -U
+cd docs
+teedoc install -i https://pypi.tuna.tsinghua.edu.cn/simple
+teedoc serve
+```
+
+Then visit `http://127.0.0.1:2333` to preview documentation on web browser.
+
+
+## For developers who want to contribute
+
+See [Contributing to MaixPy](./contribute.md).
+
+If you encounter any problems when using the source code, please refer to the [FAQ](./faq.md) first.
+
diff --git a/docs/doc/en/source_code/contribute.md b/docs/doc/en/source_code/contribute.md
new file mode 100644
index 00000000..e304ca09
--- /dev/null
+++ b/docs/doc/en/source_code/contribute.md
@@ -0,0 +1,31 @@
+---
+title: Contributing Documentation and Code to MaixPy
+---
+
+## Contributing Documentation Changes to MaixPy
+
+* Click the "Edit this page" button in the top right corner of the documentation you want to modify to enter the GitHub source documentation page.
+* Make sure you are logged in to your GitHub account.
+* Click the pencil icon in the top right corner of the GitHub preview documentation page to modify the content.
+* GitHub will prompt you to fork a copy to your own repository. Click the "Fork" button.
+> This step forks the MaixPy source code repository to your own account, allowing you to freely modify it.
+* Modify the documentation content, then fill in the modification description at the bottom of the page, and click "Commit changes".
+* Then find the "Pull requests" button in your repository and click to create a new Pull request.
+* In the pop-up page, fill in the modification description and click "Submit Pull request". Others and administrators can then see your modifications on the [Pull requests page](https://github.com/sipeed/MaixPy/pulls).
+* Wait for the administrator to review and approve, and your modifications will be merged into the MaixPy source code repository.
+* After the merge is successful, the documentation will be automatically updated to the [MaixPy official documentation](https://wiki.sipeed.com/maixpy).
+> Due to CDN caching, it may take some time to see the update. For urgent updates, you can contact the administrator for manual refreshing.
+> You can also visit [en.wiki.sipeed.com/maixpy](https://en.wiki.sipeed.com/maixpy) to view the GitHub Pages service version, which is updated in real-time without caching.
+
+## Contributing Code to MaixPy
+
+* Visit the MaixPy code repository address: [github.com/sipeed/MaixPy](https://github.com/sipeed/MaixPy)
+* Before modifying the code, it is best to create an [issue](https://github.com/sipeed/MaixPy/issues) first, describing the content you want to modify to let others know your ideas and plans, so that everyone can participate in the modification discussion and avoid duplication of effort.
+* Click the "Fork" button in the top right corner to fork a copy of the MaixPy code repository to your own account.
+* Then clone a copy of the code from your account to your local machine.
+* After modifying the code, commit it to your repository.
+* Then find the "Pull requests" button in your repository and click to create a new Pull request.
+* In the pop-up page, fill in the modification description and click "Submit Pull request". Others and administrators can then see your modifications on the [Pull requests page](https://github.com/sipeed/MaixPy/pulls).
+* Wait for the administrator to review and approve, and your modifications will be merged into the MaixPy source code repository.
+
+> Note that most of the MaixPy code is automatically generated from [MaixCDK](https://github.com/sipeed/MaixCDK), so if you modify the C/C++ source code, you may need to modify this repository first.
diff --git a/docs/doc/en/source_code/faq.md b/docs/doc/en/source_code/faq.md
new file mode 100644
index 00000000..c69bedf5
--- /dev/null
+++ b/docs/doc/en/source_code/faq.md
@@ -0,0 +1,19 @@
+MaixPy Source Code FAQ
+===
+
+## subprocess.CalledProcessError: Command '('lsb_release', '-a')' returned non-zero exit status 1.
+
+Edit `/usr/bin/lsb_release` as root, change the first line from `#!/usr/bin/python3` to `python3`.
+
+Then compile again and it should work.
+
+## ImportError: arg(): could not convert default argument 'format: maix::image::Format' in method '.__init__' into a Python object (type not registered yet?)
+
+Pybind11 needs `image::Format` to be registered before it can be used in `camera::Camera`, so we must first define `image::Format` in the generated `build/maixpy_wrapper.cpp` source file.
+
+To achieve this, edit `components/maix/headers_priority.txt`: a header that is depended on must be placed before the one that uses it.
+e.g.
+```
+maix_image.hpp
+maix_camera.hpp
+```
diff --git a/docs/doc/en/source_code/maixcdk.md b/docs/doc/en/source_code/maixcdk.md
new file mode 100644
index 00000000..eedf49ec
--- /dev/null
+++ b/docs/doc/en/source_code/maixcdk.md
@@ -0,0 +1,14 @@
+---
+title: Switching to MaixCDK for C/C++ Application Development
+---
+
+In addition to developing with MaixPy, there is also a corresponding C/C++ SDK available, called [MaixCDK](https://github.com/sipeed/MaixCDK).
+
+## Introduction to MaixCDK
+
+MaixPy is built on top of MaixCDK, and most of MaixPy's APIs are automatically generated based on MaixCDK's APIs. Therefore, any functionality available in MaixPy is also included in MaixCDK.
+If you are more familiar with C/C++ programming or require higher performance, you can use MaixCDK for development.
+
+## Using MaixCDK
+
+The MaixCDK code repository is located at [github.com/sipeed/MaixCDK](https://github.com/sipeed/MaixCDK), where you can find the MaixCDK code and documentation.
diff --git a/docs/doc/en/video/jpeg_streaming.md b/docs/doc/en/video/jpeg_streaming.md
new file mode 100644
index 00000000..774ceb86
--- /dev/null
+++ b/docs/doc/en/video/jpeg_streaming.md
@@ -0,0 +1,44 @@
+---
+title: MaixPy Video Stream JPEG Streaming / Sending Images to Server
+update:
+ - date: 2024-04-03
+ author: neucrack
+ version: 1.0.0
+ content: Initial document
+
+---
+
+## Introduction
+
+Sometimes it is necessary to send images to a server or push video from a camera to a server. Here, we provide the simplest method, which is to compress images into `JPEG` format and send them one by one to the server.
+
+Note, this is a very basic method and not a formal way to stream video. It is also not suitable for high-resolution, high-frame-rate video streams, as it involves sending images one by one. For more efficient video streaming, please use the `RTSP` or `RTMP` modules discussed later.
+
+## How to Use
+
+```python
+from maix import image
+import requests
+
+# create image
+img = image.Image(640, 480, image.Format.FMT_RGB)
+# draw something
+img.draw_rect(60, 60, 80, 80, image.Color.from_rgb(255, 0, 0))
+
+# convert to jpeg
+jpeg = img.to_format(image.Format.FMT_JPEG) # image.Format.FMT_PNG
+# get jpeg bytes
+jpeg_bytes = jpeg.to_bytes()
+
+# faster way, borrow memory from jpeg object,
+# but be careful, when jpeg object is deleted, jpeg_bytes object MUST NOT be used, or program will crash
+# jpeg_bytes = jpeg.to_bytes(copy = False)
+
+# send image binary bytes to server
+url = "http://192.168.0.123:8080/upload"
+res = requests.post(url, data=jpeg_bytes)
+print(res.status_code)
+print(res.text)
+```
+
+As you can see, the image is first converted into `JPEG` format, and then the binary data of the `JPEG` image is sent to the server via an HTTP `POST` request.
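+
+For testing on a PC, the receiving side could look like the following sketch. This is only an illustration, not part of MaixPy: it assumes Flask is installed (`pip install flask`) and that the `/upload` path and port `8080` match the URL used on the device; adjust them to your own server.
+
+```python
+# Hypothetical receiving endpoint for the example above (run on the PC, e.g. 192.168.0.123)
+from flask import Flask, request
+
+app = Flask(__name__)
+
+@app.route("/upload", methods=["POST"])
+def upload():
+    jpeg_bytes = request.get_data()           # raw JPEG bytes from the request body
+    with open("received.jpg", "wb") as f:     # save each frame, overwriting the previous one
+        f.write(jpeg_bytes)
+    return "ok"
+
+if __name__ == "__main__":
+    app.run(host="0.0.0.0", port=8080)
+```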
diff --git a/docs/doc/en/vision/ai.md b/docs/doc/en/vision/ai.md
new file mode 100644
index 00000000..319b1331
--- /dev/null
+++ b/docs/doc/en/vision/ai.md
@@ -0,0 +1,24 @@
+---
+title: Basic Knowledge of AI Vision
+update:
+ - date: 2024-04-03
+ author: neucrack
+ version: 1.0.0
+ content: Initial documentation
+---
+
+## Introduction
+
+If you don't have an AI background, you can first read [What is Artificial Intelligence (AI) and Machine Learning](https://wiki.sipeed.com/ai/en/basic/what_is_ai.html) to understand the basic concepts of AI before learning about AI.
+
+Then, the visual AI we use is generally based on the `deep neural network learning` method. If you are interested, you can check out [Deep Neural Network (DNN) Basics](https://wiki.sipeed.com/ai/en/basic/dnn_basic.html).
+
+## Using Visual AI in MaixPy
+
+Using visual AI in MaixPy is very simple. By default, commonly used AI models are provided, and you can use them directly without having to train the models yourself. You can find the `maixcam` models in the [MaixHub Model Library](https://maixhub.com/model/zoo).
+
+Additionally, the underlying APIs have been well-encapsulated, and you only need to make simple calls to implement them.
+
+If you want to train your own model, you can start with [MaixHub Online Training](https://maixhub.com/model/training/project). On the online platform, you can train models just by clicking, without the need to purchase expensive machines, set up complex development environments, or write code, making it very suitable for beginners and also for experienced users who are too lazy to read code.
+
+Generally, once you have obtained the model file, you can transfer it to the device and call the MaixPy API to use it. The specific calling methods are discussed in the following sections.
diff --git a/docs/doc/en/vision/apriltag.md b/docs/doc/en/vision/apriltag.md
new file mode 100644
index 00000000..ee164986
--- /dev/null
+++ b/docs/doc/en/vision/apriltag.md
@@ -0,0 +1,127 @@
+---
+title: MaixPy Apriltag Recognition
+update:
+ - date: 2024-04-03
+ author: lxowalle
+ version: 1.0.0
+ content: Initial documentation
+---
+
+Before reading this article, make sure you are familiar with how to develop with MaixPy. For more details, please read [MaixVision -- MaixPy Programming + Graphical Block Programming](../basic/maixvision.md).
+
+## Introduction
+
+This article introduces how to use MaixPy to recognize Apriltag labels.
+
+## Using MaixPy to Recognize Apriltag Labels
+
+MaixPy's `maix.image.Image` provides the `find_apriltags` method, which can be used to recognize Apriltag labels.
+
+### How to Recognize Apriltag Labels
+
+A simple example of recognizing Apriltag labels and drawing bounding boxes:
+
+```python
+from maix import image, camera, display
+
+cam = camera.Camera()
+disp = display.Display()
+
+families = image.ApriltagFamilies.TAG36H11
+x_scale = cam.width() / 160
+y_scale = cam.height() / 120
+
+while 1:
+ img = cam.read()
+
+ new_img = img.resize(160, 120)
+ apriltags = new_img.find_apriltags(families = families)
+ for a in apriltags:
+ corners = a.corners()
+
+ for i in range(4):
+ corners[i][0] = int(corners[i][0] * x_scale)
+ corners[i][1] = int(corners[i][1] * y_scale)
+ x = int(a.x() * x_scale)
+ y = int(a.y() * y_scale)
+ w = int(a.w() * x_scale)
+ h = int(a.h() * y_scale)
+
+ for i in range(4):
+ img.draw_line(corners[i][0], corners[i][1], corners[(i + 1) % 4][0], corners[(i + 1) % 4][1], image.COLOR_RED)
+ img.draw_string(x + w, y, "id: " + str(a.id()), image.COLOR_RED)
+ img.draw_string(x + w, y + 15, "family: " + str(a.family()), image.COLOR_RED)
+
+ disp.show(img)
+```
+
+Steps:
+
+1. Import the image, camera, and display modules
+
+ ```python
+ from maix import image, camera, display
+ ```
+
+2. Initialize the camera and display
+
+ ```python
+ cam = camera.Camera()
+ disp = display.Display()
+ ```
+
+3. Get the image from the camera and display it
+
+ ```python
+ while 1:
+ img = cam.read()
+ disp.show(img)
+ ```
+
+4. Call the `find_apriltags` method to recognize Apriltag labels in the camera image
+
+ ```python
+ new_img = img.resize(160, 120)
+ apriltags = new_img.find_apriltags(families = families)
+ ```
+
+ - `img` is the camera image obtained through `cam.read()`
+ - `img.resize(160, 120)` is used to scale down the image to a smaller size, allowing the algorithm to compute faster with a smaller image
+ - `new_img.find_apriltags(families = families)` is used to find Apriltag labels, and the query results are saved in `apriltags` for further processing. The `families` parameter is used to select the Apriltag family, defaulting to `image.ApriltagFamilies.TAG36H11`
+
+5. Process the recognized label results and display them on the screen
+
+ ```python
+ for a in apriltags:
+ # Get position information (and map coordinates to the original image)
+ x = int(a.x() * x_scale)
+ y = int(a.y() * y_scale)
+ w = int(a.w() * x_scale)
+ corners = a.corners()
+ for i in range(4):
+ corners[i][0] = int(corners[i][0] * x_scale)
+ corners[i][1] = int(corners[i][1] * y_scale)
+
+ # Display
+ for i in range(4):
+ img.draw_line(corners[i][0], corners[i][1], corners[(i + 1) % 4][0], corners[(i + 1) % 4][1], image.COLOR_RED)
+ img.draw_string(x + w, y, "id: " + str(a.id()), image.COLOR_RED)
+ img.draw_string(x + w, y + 15, "family: " + str(a.family()), image.COLOR_RED)
+ img.draw_string(x + w, y + 30, "rotation : " + str(180 * a.rotation() // 3.1415), image.COLOR_RED)
+ ```
+
+ - Iterate through the members of `apriltags`, which is the result of scanning Apriltag labels through `img.find_apriltags()`. If no labels are found, the members of `apriltags` will be empty.
+ - `x_scale` and `y_scale` are used to map coordinates. Since `new_img` is a scaled-down image, the coordinates of the Apriltag need to be mapped to be drawn correctly on the original image `img`.
+ - `a.corners()` is used to get the coordinates of the four vertices of the detected label, and `img.draw_line()` uses these four vertex coordinates to draw the shape of the label.
+ - `img.draw_string` is used to display the label content, where `a.x()` and `a.y()` are used to get the x and y coordinates of the top-left corner of the label, `a.id()` is used to get the label ID, `a.family()` is used to get the label family type, and `a.rotation()` is used to get the rotation angle of the label.
+
+### Common Parameter Explanations
+
+Here are explanations for common parameters. If you can't find parameters to implement your application, you may need to consider using other algorithms or extending the required functionality based on the current algorithm's results.
+
+| Parameter | Description | Example |
+| --------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
+| roi | Set the rectangular region for the algorithm to compute. roi=[x, y, w, h], where x and y represent the coordinates of the top-left corner of the rectangle, and w and h represent the width and height of the rectangle. The default is the entire image. | Compute only the region at (50, 50) with a width and height of 100:<br>```img.find_apriltags(roi=[50, 50, 100, 100])``` |
+| families | Apriltag label family type | Scan for labels from the TAG36H11 family:<br>```img.find_apriltags(families = image.ApriltagFamilies.TAG36H11)``` |
+
+This article introduces common methods. For more API information, please refer to the [image](../../../api/maix/image.md) section of the API documentation.
diff --git a/docs/doc/en/vision/assets/custmize_model1.png b/docs/doc/en/vision/assets/custmize_model1.png
new file mode 100644
index 00000000..d33a9332
Binary files /dev/null and b/docs/doc/en/vision/assets/custmize_model1.png differ
diff --git a/docs/doc/en/vision/assets/custmize_model10.png b/docs/doc/en/vision/assets/custmize_model10.png
new file mode 100644
index 00000000..09261c73
Binary files /dev/null and b/docs/doc/en/vision/assets/custmize_model10.png differ
diff --git a/docs/doc/en/vision/assets/custmize_model11.png b/docs/doc/en/vision/assets/custmize_model11.png
new file mode 100644
index 00000000..6f1b9ad7
Binary files /dev/null and b/docs/doc/en/vision/assets/custmize_model11.png differ
diff --git a/docs/doc/en/vision/assets/custmize_model2.png b/docs/doc/en/vision/assets/custmize_model2.png
new file mode 100644
index 00000000..924accfd
Binary files /dev/null and b/docs/doc/en/vision/assets/custmize_model2.png differ
diff --git a/docs/doc/en/vision/assets/custmize_model3.png b/docs/doc/en/vision/assets/custmize_model3.png
new file mode 100644
index 00000000..5e330316
Binary files /dev/null and b/docs/doc/en/vision/assets/custmize_model3.png differ
diff --git a/docs/doc/en/vision/assets/custmize_model4.png b/docs/doc/en/vision/assets/custmize_model4.png
new file mode 100644
index 00000000..e4832068
Binary files /dev/null and b/docs/doc/en/vision/assets/custmize_model4.png differ
diff --git a/docs/doc/en/vision/assets/custmize_model5.png b/docs/doc/en/vision/assets/custmize_model5.png
new file mode 100644
index 00000000..9486d6d0
Binary files /dev/null and b/docs/doc/en/vision/assets/custmize_model5.png differ
diff --git a/docs/doc/en/vision/assets/custmize_model6.png b/docs/doc/en/vision/assets/custmize_model6.png
new file mode 100644
index 00000000..89330047
Binary files /dev/null and b/docs/doc/en/vision/assets/custmize_model6.png differ
diff --git a/docs/doc/en/vision/assets/custmize_model7.png b/docs/doc/en/vision/assets/custmize_model7.png
new file mode 100644
index 00000000..88fe9d95
Binary files /dev/null and b/docs/doc/en/vision/assets/custmize_model7.png differ
diff --git a/docs/doc/en/vision/assets/custmize_model8.png b/docs/doc/en/vision/assets/custmize_model8.png
new file mode 100644
index 00000000..b41aa891
Binary files /dev/null and b/docs/doc/en/vision/assets/custmize_model8.png differ
diff --git a/docs/doc/en/vision/assets/custmize_model9.png b/docs/doc/en/vision/assets/custmize_model9.png
new file mode 100644
index 00000000..f2b8355e
Binary files /dev/null and b/docs/doc/en/vision/assets/custmize_model9.png differ
diff --git a/docs/doc/en/vision/body_key_points.md b/docs/doc/en/vision/body_key_points.md
new file mode 100644
index 00000000..8e06a0db
--- /dev/null
+++ b/docs/doc/en/vision/body_key_points.md
@@ -0,0 +1,26 @@
+---
+title: MaixPy Human Body Keypoint Detection for Pose Estimation
+---
+
+## Introduction
+
+Using MaixPy, you can easily detect the coordinates of human joint keypoints, which can be used for pose estimation such as sitting posture detection, motion-controlled game input, and more.
+
+## Usage
+
+Using the `maix.nn.BodyKeyPoints` class in MaixPy, you can easily implement this functionality:
+
+```python
+from maix import nn, image, camera, display
+
+detector = nn.BodyKeyPoints(model="/root/models/body_key_points.mud")
+cam = camera.Camera(detector.input_width(), detector.input_height(), detector.input_format())
+dis = display.Display()
+
+while 1:
+ img = cam.read()
+ points = detector.detect(img)
+ for point in points:
+ img.draw_circle(point[0], point[1], 3, color=image.COLOR_RED, thickness=-1)
+ dis.show(img)
+```
diff --git a/docs/doc/en/vision/camera.md b/docs/doc/en/vision/camera.md
new file mode 100644
index 00000000..021bbac0
--- /dev/null
+++ b/docs/doc/en/vision/camera.md
@@ -0,0 +1,59 @@
+---
+title: MaixPy Camera Usage
+update:
+ - date: 2024-04-03
+ author: neucrack
+ version: 1.0.0
+ content: Initial documentation
+---
+
+## Introduction
+
+The MaixCAM comes with a GC4653 camera pre-installed; an OS04A10 camera, a global shutter camera, and even an HDMI-to-MIPI module are available as options, and all of them can be used directly with simple API calls.
+
+## API Documentation
+
+This article introduces common methods. For more API usage, refer to the documentation of the [maix.camera](/api/maix/camera.html) module.
+
+## Camera Switching
+
+Different cameras use different drivers, and the correct driver needs to be selected in the system.
+
+TODO: How to switch between cameras, such as between GC4653 and OS04A10.
+
+## Getting Images from the Camera
+
+Using MaixPy, you can easily get images from the camera:
+```python
+from maix import camera
+
+cam = camera.Camera(640, 480)
+
+while 1:
+ img = cam.read()
+ print(img)
+```
+
+Here we import the `camera` module from the `maix` module, then create a `Camera` object, specifying the width and height of the image. Then, in a loop, we continuously read the images. The default output is in `RGB` format. If you need `BGR` format or other formats, please refer to the API documentation.
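+
+As a sketch of what this can look like, the constructor also accepts a format argument, as used elsewhere in these docs when matching a model's input format. `FMT_GRAYSCALE` below is just one assumed example; see the API documentation for the full list of formats:
+
+```python
+from maix import camera, image
+
+# Assumption: the third constructor argument takes an image.Format value
+cam = camera.Camera(640, 480, image.Format.FMT_GRAYSCALE)
+img = cam.read()
+print(img.format())  # should report the grayscale format
+```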
+
+## Skipping Initial Frames
+
+During the brief initialization period of the camera, the image acquisition may not be stable, resulting in strange images. You can use the `skip_frames` function to skip the initial few frames:
+```python
+cam = camera.Camera(640, 480)
+cam.skip_frames(30) # Skip the first 30 frames
+```
+
+## Displaying Images
+
+MaixPy provides the `display` module, which can conveniently display images:
+```python
+from maix import camera, display
+
+cam = camera.Camera(640, 480)
+disp = display.Display()
+
+while 1:
+ img = cam.read()
+ disp.show(img)
+```
diff --git a/docs/doc/en/vision/classify.md b/docs/doc/en/vision/classify.md
new file mode 100644
index 00000000..75ae12e9
--- /dev/null
+++ b/docs/doc/en/vision/classify.md
@@ -0,0 +1,41 @@
+---
+title: Using AI Models for Object Classification in MaixPy
+---
+
+## Object Classification Concept
+
+For example, if there are two images in front of you, one with an apple and the other with an airplane, the task of object classification is to input these two images into an AI model one by one. The model will then output two results, one for apple and one for airplane.
+
+## Using Object Classification in MaixPy
+
+MaixPy provides a pre-trained `1000`-class classification model based on the `ImageNet` dataset, which can be used directly:
+
+```python
+from maix import camera, display, image, nn
+
+classifier = nn.Classifier(model="/root/models/mobilenetv2.mud")
+cam = camera.Camera(classifier.input_width(), classifier.input_height(), classifier.input_format())
+dis = display.Display()
+
+while 1:
+ img = cam.read()
+ res = classifier.classify(img)
+ max_idx, max_prob = res[0]
+ msg = f"{max_prob:5.2f}: {classifier.labels[max_idx]}"
+ img.draw_string(10, 10, msg, image.COLOR_RED)
+ dis.show(img)
+```
+
+Result video:
+
+
+
+Here, the camera captures an image, which is then passed to the `classifier` for recognition. The result is displayed on the screen.
+
+For more API usage, refer to the documentation for the [maix.nn](/api/maix/nn.html) module.
+
+## Training Your Own Classification Model
+
+Please go to [MaixHub](https://maixhub.com) to learn and train classification models. When creating a project, select `Classification Model`.
diff --git a/docs/doc/en/vision/custmize_model.md b/docs/doc/en/vision/custmize_model.md
new file mode 100644
index 00000000..f77df767
--- /dev/null
+++ b/docs/doc/en/vision/custmize_model.md
@@ -0,0 +1,459 @@
+---
+title: MaixPy Custom (Offline Training) AI Model and Running
+update:
+ - date: 2024-4-23
+ version: v1.0
+ author: dragonforward
+ content:
+ Added YOLOv5s deployment
+---
+
+> This post is contributed by the community user dragonforward
+
+> This post shows, step by step from scratch, how to deploy your own YOLOv5s model (the author uses a hard-hat detection model as the example). The training part follows the author's earlier work; if you have already trained a model, you can skip it, although there are some differences.
+
+**Obtain Custom-Trained YOLOv5s ONNX Model**
+--------------------------------------------
+
+### **Prepare Custom Dataset (The author uses the VOC dataset)**
+
+* `Dataset Directory Structure` is as follows:
+
+```
+└─VOC2028: Custom dataset
+ ├─Annotations Stores the dataset label files in XML format
+ ├─ImageSets Dataset split files
+ │ └─Main
+ ├─JPEGImages Stores the dataset images
+```
+
+* `Split the Dataset`
+
+
+Run `python3 split_train_val.py` in the directory that contains `split_train_val.py`, and you will get the following directory structure:
+
+```
+└─VOC2028: Custom dataset
+ ├─Annotations Stores the dataset label files in XML format
+ ├─ImageSets Dataset split files
+ │ └─Main
+ │    ├─test.txt
+ │    ├─train.txt
+ │    └─val.txt
+ ├─JPEGImages Stores the dataset images
+ ├─split_train_val.py Python file for splitting the dataset
+```
+
+The `split_train_val.py` file code:
+
+```python
+# -*- coding: utf-8 -*-
+"""
+Author: dragonforward
+Description: Split into training, validation, and test sets in the ratio of 8:1:1, 8 for training, 1 for validation, and 1 for testing.
+"""
+import os
+import random
+import argparse
+
+parser = argparse.ArgumentParser()
+# Address of the XML files, modify according to your data. XML files are usually stored in Annotations
+parser.add_argument('--xml_path', default='Annotations/', type=str, help='input xml label path')
+# Dataset split, choose the address under your data's ImageSets/Main
+parser.add_argument('--txt_path', default='ImageSets/Main/', type=str, help='output txt label path')
+opt = parser.parse_args()
+
+train_percent = 0.8 # Proportion of the training set
+val_percent = 0.1 # Proportion of the validation set
+test_persent = 0.1 # Proportion of the test set
+
+xmlfilepath = opt.xml_path
+txtsavepath = opt.txt_path
+total_xml = os.listdir(xmlfilepath)
+
+if not os.path.exists(txtsavepath):
+ os.makedirs(txtsavepath)
+
+num = len(total_xml)
+list = list(range(num))
+
+t_train = int(num * train_percent)
+t_val = int(num * val_percent)
+
+train = random.sample(list, t_train)
+num1 = len(train)
+for i in range(num1):
+ list.remove(train[i])
+
+
+val_test = [i for i in list if not i in train]
+val = random.sample(val_test, t_val)
+num2 = len(val)
+for i in range(num2):
+ list.remove(val[i])
+
+
+file_train = open(txtsavepath + '/train.txt', 'w')
+file_val = open(txtsavepath + '/val.txt', 'w')
+file_test = open(txtsavepath + '/test.txt', 'w')
+
+for i in train:
+ name = total_xml[i][:-4] + '\n'
+ file_train.write(name)
+
+for i in val:
+ name = total_xml[i][:-4] + '\n'
+ file_val.write(name)
+
+for i in list:
+ name = total_xml[i][:-4] + '\n'
+ file_test.write(name)
+
+
+file_train.close()
+file_val.close()
+file_test.close()
+```
+
+* `Convert the VOC annotations to YOLO labels to obtain the label files`
+
+
+Directory structure:
+
+```
+└─VOC2028: Custom dataset
+ ├─Annotations Stores the dataset label files in XML format
+ ├─ImageSets Dataset split files
+ │ └─Main
+ ├─JPEGImages Stores the dataset images
+ └─labels YOLOv5 treats this folder as the training annotation folder
+└─voc_label.py
+```
+
+The `voc_label.py` file code:
+
+```python
+# -*- coding: utf-8 -*-
+import xml.etree.ElementTree as ET
+import os
+
+sets = ['train', 'val', 'test'] # If your Main folder doesn't have test.txt, remove 'test'
+classes = ["hat", "people"] # Change to your own classes, VOC dataset has the following 20 classes
+# classes = ["brickwork", "coil","rebar"] # Change to your own classes, VOC dataset has the following 20 classes
+# classes = ["aeroplane", 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog',
+# 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor'] # class names
+# abs_path = os.getcwd() /root/yolov5/data/voc_label.py
+abs_path = '/root/yolov5/data/'
+
+def convert(size, box):
+ dw = 1. / (size[0])
+ dh = 1. / (size[1])
+ x = (box[0] + box[1]) / 2.0 - 1
+ y = (box[2] + box[3]) / 2.0 - 1
+ w = box[1] - box[0]
+ h = box[3] - box[2]
+ x = x * dw
+ w = w * dw
+ y = y * dh
+ h = h * dh
+ return x, y, w, h
+
+
+def convert_annotation(image_id):
+ in_file = open(abs_path + '/VOC2028/Annotations/%s.xml' % (image_id), encoding='UTF-8')
+ out_file = open(abs_path + '/VOC2028/labels/%s.txt' % (image_id), 'w')
+ tree = ET.parse(in_file)
+ root = tree.getroot()
+ size = root.find('size')
+ w = int(size.find('width').text)
+ h = int(size.find('height').text)
+ for obj in root.iter('object'):
+ difficult = obj.find('difficult').text
+ # difficult = obj.find('Difficult').text
+ cls = obj.find('name').text
+ if cls not in classes or int(difficult) == 1:
+ continue
+ cls_id = classes.index(cls)
+ xmlbox = obj.find('bndbox')
+ b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text), float(xmlbox.find('ymin').text),
+ float(xmlbox.find('ymax').text))
+ b1, b2, b3, b4 = b
+ # Bounding box correction
+ if b2 > w:
+ b2 = w
+ if b4 > h:
+ b4 = h
+ b = (b1, b2, b3, b4)
+ bb = convert((w, h), b)
+ out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')
+
+
+for image_set in sets:
+ if not os.path.exists(abs_path + '/VOC2028/labels/'):
+ os.makedirs(abs_path + '/VOC2028/labels/')
+
+ image_ids = open(abs_path + '/VOC2028/ImageSets/Main/%s.txt' % (image_set)).read().strip().split()
+ list_file = open(abs_path + '/VOC2028/%s.txt' % (image_set), 'w')
+ for image_id in image_ids:
+ list_file.write(abs_path + '/VOC2028/JPEGImages/%s.jpg\n' % (image_id)) # Either complete the path yourself, or only writing half may cause an error
+ convert_annotation(image_id)
+ list_file.close()
+```
+
+![custmize_model8](assets/custmize_model8.png)
+
+### **Train the Model**
+
+* Configure the environment
+
+```
+git clone https://github.com/ultralytics/yolov5
+cd yolov5
+pip install -r requirements.txt
+pip install onnx
+```
+
+* Download pre-trained weights (the author tried both v7.0 and v6.0 pt, and both work)
+
+```
+https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt
+```
+
+![custmize_model11](assets/custmize_model11.png)
+
+* Train the model (the author used the school's cluster for training)
+
+```
+python3 train.py --weights weights/yolov5s.pt --cfg models/yolov5s.yaml --data data/safthat.yaml --epochs 150 --batch-size 16 --multi-scale --device 0
+
+```
+
+![custmize_model9](assets/custmize_model9.png)
+
+```
+python3 detect.py --source /root/yolov5/data/images/000000.jpg --weights /root/yolov5/runs/train/exp13/weights/best.pt --conf-thres 0.25
+```
+
+![custmize_model10](assets/custmize_model10.png)
+
+* Export the ONNX model. Because the school cluster was not available at the time, the author exported it from a local conda environment on a laptop. `--imgsz 224 320` is used because it suits the screen better: 640x640 was also tried, but the camera reported an error suggesting 640x480, and since the Sipeed YOLOv5s model uses 320x224, the export was kept consistent with it.
+
+```
+python export.py --weights yolov5s_hat.pt --include onnx --opset 16 --imgsz 224 320
+```
+
+![custmize_model5](assets/custmize_model5.png)
+
+You can view the model by opening it at https://netron.app; this model has three outputs:
+
+![custmize_model2](assets/custmize_model2.png)
+
+Here are the author's three outputs:
+
+```
+onnx::Shape_329
+onnx::Shape_384
+onnx::Shape_439
+```
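+
+If you prefer not to use Netron, the output names can also be listed programmatically. A small sketch using the `onnx` package installed earlier (the file name is the one exported above):
+
+```python
+# Print the ONNX graph's output names (an alternative to checking them in Netron)
+import onnx
+
+model = onnx.load("yolov5s_hat.onnx")
+print([o.name for o in model.graph.output])
+```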
+
+## Model Conversion (Key Step)
+
+### **Install Docker Environment (Skip if already installed)**
+
+```
+# Install the basic software required for Docker
+sudo apt-get update
+sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
+# Add the official source
+curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
+sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
+# Install Docker
+sudo apt-get update
+sudo apt-get install docker-ce docker-ce-cli containerd.io
+```
+
+### **Start the Model Quantization Process (!!!)**
+
+### **Preparation**
+
+Download the following two files from https://github.com/sophgo/tpu-mlir/releases/tag/v1.7 :
+
+* `tpu-mlir-resource.tar`
+* `tpu_mlir-1.7-py3-none-any.whl`
+
+![custmize_model3](assets/custmize_model3.png)
+
+The latest version is pulled here because the author failed with version 3.1; the tools are updated frequently, so it is better to stay on the latest release. As the image below shows, version 3.1 was also tried.
+
+![custmize_model7](assets/custmize_model7.png)
+
+```
+docker pull sophgo/tpuc_dev:latest
+
+# After entering the container, copy the two prepared files to the workspace directory
+
+root@3d517bc7f51f:/workspace/model_yolov5s# cd ..
+root@3d517bc7f51f:/workspace# ls
+model_yolov5s tpu-mlir-resource tpu-mlir-resource.tar tpu_mlir-1.7-py3-none-any.whl
+root@3d517bc7f51f:/workspace#
+
+# Choose one of the following two options; the second one (offline install) is recommended:
+#   pip install tpu_mlir[all]   or   pip install tpu_mlir-*-py3-none-any.whl[all]
+# The author chose the second option
+pip install tpu_mlir-1.7-py3-none-any.whl
+# And install all of its dependencies
+pip install tpu_mlir-1.7-py3-none-any.whl[all]
+# Extract
+tar -xvf tpu-mlir-resource.tar
+# Rename the folder
+mv regression/ tpu-mlir-resource/
+
+
+mkdir model_yolov5s && cd model_yolov5s
+
+cp -rf ../tpu-mlir-resource/dataset/COCO2017 .
+cp -rf ../tpu-mlir-resource/image .
+
+
+# Transfer the previously prepared 100 images, one test image, and the ONNX model to the following location
+root@3d517bc7f51f:/workspace# cd model_yolov5s/
+root@3d517bc7f51f:/workspace/model_yolov5s# ls
+COCO2017 image workspace yolov5n_hat.onnx yolov5s_hat.onnx
+root@3d517bc7f51f:/workspace/model_yolov5s# cd COCO2017/
+root@3d517bc7f51f:/workspace/model_yolov5s/COCO2017# ls
+000000.jpg 000011.jpg 000022.jpg 000032.jpg 000042.jpg 000053.jpg 000066.jpg 000076.jpg 000086.jpg 000096.jpg
+000002.jpg 000012.jpg 000023.jpg 000033.jpg 000043.jpg 000054.jpg 000067.jpg 000077.jpg 000087.jpg 000101.jpg
+000003.jpg 000013.jpg 000024.jpg 000034.jpg 000044.jpg 000055.jpg 000068.jpg 000078.jpg 000088.jpg 000102.jpg
+000004.jpg 000014.jpg 000025.jpg 000035.jpg 000045.jpg 000058.jpg 000069.jpg 000079.jpg 000089.jpg 000103.jpg
+000005.jpg 000015.jpg 000026.jpg 000036.jpg 000046.jpg 000059.jpg 000070.jpg 000080.jpg 000090.jpg 000104.jpg
+000006.jpg 000016.jpg 000027.jpg 000037.jpg 000048.jpg 000061.jpg 000071.jpg 000081.jpg 000091.jpg 000105.jpg
+000007.jpg 000017.jpg 000028.jpg 000038.jpg 000049.jpg 000062.jpg 000072.jpg 000082.jpg 000092.jpg 000106.jpg
+000008.jpg 000019.jpg 000029.jpg 000039.jpg 000050.jpg 000063.jpg 000073.jpg 000083.jpg 000093.jpg 000107.jpg
+000009.jpg 000020.jpg 000030.jpg 000040.jpg 000051.jpg 000064.jpg 000074.jpg 000084.jpg 000094.jpg 000108.jpg
+000010.jpg 000021.jpg 000031.jpg 000041.jpg 000052.jpg 000065.jpg 000075.jpg 000085.jpg 000095.jpg 000109.jpg
+root@3d517bc7f51f:/workspace/model_yolov5s/COCO2017# ls -l | grep "^-" | wc -l
+100
+root@3d517bc7f51f:/workspace/model_yolov5s/COCO2017#
+
+# You can use ls -l | grep "^-" | wc -l to check the number of images.
+# The author replaced the 100 helmet images and the test image in the COCO2017 folder.
+
+# Go back to model_yolov5s
+root@3d517bc7f51f:/workspace/model_yolov5s/COCO2017# cd ..
+root@3d517bc7f51f:/workspace/model_yolov5s# ls
+COCO2017 image workspace yolov5n_hat.onnx yolov5s_hat.onnx
+root@3d517bc7f51f:/workspace/model_yolov5s#
+
+# Next
+mkdir workspace && cd workspace
+# Execute the following command to convert ONNX to MLIR (remember to replace output_names with your own)
+model_transform \
+--model_name yolov5s \
+--model_def ../yolov5s_hat.onnx \
+--input_shapes [[1,3,224,320]] \
+--mean 0.0,0.0,0.0 \
+--scale 0.0039216,0.0039216,0.0039216 \
+--keep_aspect_ratio \
+--pixel_format rgb \
+--output_names onnx::Shape_329,onnx::Shape_439,onnx::Shape_384 \
+--test_input ../image/hat.jpg \
+--test_result yolov5s_top_outputs.npz \
+--mlir yolov5s.mlir
+
+# To convert the MLIR model to an INT8 model, first run calibration to obtain the calibration table
+run_calibration yolov5s.mlir \
+--dataset ../COCO2017 \
+--input_num 100 \
+-o yolov5s_cali_table
+# Then execute the following
+model_deploy \
+--mlir yolov5s.mlir \
+--quantize INT8 \
+--calibration_table yolov5s_cali_table \
+--processor cv181x \
+--test_input yolov5s_in_f32.npz \
+--test_reference yolov5s_top_outputs.npz \
+--tolerance 0.85,0.45 \
+--model yolov5s_cv181x_int8_sym.cvimodel
+
+# Finally, you will get the following files:
+root@3d517bc7f51f:/workspace/model_yolov5s/workspace# ls
+_weight_map.csv yolov5s_cv181x_int8_sym.cvimodel yolov5s_origin.mlir
+build_flag.json yolov5s_cv181x_int8_sym_final.mlir yolov5s_top_f32_all_origin_weight.npz
+final_opt.onnx yolov5s_cv181x_int8_sym_tensor_info.txt yolov5s_top_f32_all_weight.npz
+yolov5s.mlir yolov5s_cv181x_int8_sym_tpu.mlir yolov5s_top_outputs.npz
+yolov5s_cali_table yolov5s_in_f32.npz yolov5s_tpu_addressed_cv181x_int8_sym_weight.npz
+yolov5s_cv181x_int8_sym yolov5s_opt.onnx.prototxt yolov5s_tpu_addressed_cv181x_int8_sym_weight_fix.npz
+root@3d517bc7f51f:/workspace/model_yolov5s/workspace#
+```
+
+Through the above steps, you can obtain the quantized model that can be deployed to the development board.
+
+Explanation:
+The processor is specified as `cv181x` because a model built for `cv180x` was tried first and failed with the following error:
+
+```
+-- [I] load cvimodel from: /root/models/yolov5n.cvimodel
+cvimodel built for cv180x CANNOT run on platform cv181x
+failed to parse cvimodel
+
+```
+
+## Running the Model on an Actual Device
+
+* The contents of `yolov5s_hat.mud` are as follows:
+
+```
+[basic]
+type = cvimodel
+model = yolov5s_hat_cv181x_int8_sym.cvimodel
+
+[extra]
+model_type = yolov5
+input_type = rgb
+mean = 0, 0, 0
+scale = 0.00392156862745098, 0.00392156862745098, 0.00392156862745098
+anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
+labels = hat,person
+```
+
+Run the code:
+
+```python
+from maix import camera, display, image, nn, app
+
+detector = nn.YOLOv5(model="/root/models/yolov5s_hat.mud")
+cam = camera.Camera(detector.input_width(), detector.input_height(), detector.input_format())
+dis = display.Display()
+print("www")
+print(detector.input_width(), detector.input_height(), detector.input_format())
+
+while not app.need_exit():
+ img = cam.read()
+ objs = detector.detect(img, conf_th=0.5, iou_th=0.45)
+ for obj in objs:
+ img.draw_rect(obj.x, obj.y, obj.w, obj.h, color=image.COLOR_RED)
+ msg = f'{detector.labels[obj.class_id]}: {obj.score:.2f}'
+ img.draw_string(obj.x, obj.y, msg, color=image.COLOR_RED)
+ dis.show(img)
+```
+
+![custmize_model4](assets/custmize_model4.png)
+
+In the screenshot above, `10.84.117.1` is the device's IP address. Upload the `cvimodel` and `mud` files to the `/root/models/` directory on the device.
+
+After packaging, install the application and run it, or you can run it in the IDE.
+
+![custmize_model6](assets/custmize_model6.png)
+
+Video link:
+
+```
+https://www.bilibili.com/video/BV1xz421S7Rx/?spm_id_from=333.999.0.0&vd_source=b1fff0f773136d7d05331087929c7739
+```
+
+**Acknowledgments**
+------
+
+Thanks to `谁说现在是冬天呢` for some insights.
\ No newline at end of file
diff --git a/docs/doc/en/vision/display.md b/docs/doc/en/vision/display.md
new file mode 100644
index 00000000..9bf557e7
--- /dev/null
+++ b/docs/doc/en/vision/display.md
@@ -0,0 +1,96 @@
+---
+title: MaixPy Screen Usage
+update:
+
+ - date: 2024-03-31
+ author: neucrack
+ version: 1.0.0
+ content: Initial document
+---
+## Introduction
+
+MaixPy provides the `display` module, which can display images on the screen, and can also send images to MaixVision for display, facilitating debugging and development.
+
+## API Documentation
+
+This document introduces commonly used methods. For more APIs, please refer to the [display](/api/maix/display.html) section of the API documentation.
+
+## Using the Screen
+
+* Import the `display` module:
+```python
+from maix import display
+```
+
+* Create a `Display` object:
+```python
+disp = display.Display()
+```
+
+* Display an image:
+```python
+disp.show(img)
+```
+
+Here, the `img` object is a `maix.image.Image` object, which can be obtained through the `read` method of the `camera` module, or loaded from an image file in the file system using the `load` method of the `image` module, or created as a blank image using the `Image` class of the `image` module.
+
+For example:
+```python
+from maix import image, display
+
+disp = display.Display()
+img = image.load("/root/dog.jpg")
+disp.show(img)
+```
+Here, you need to transfer the `dog.jpg` file to the `/root` directory on the device first.
+
+Display text:
+```python
+from maix import image, display
+
+disp = display.Display()
+img = image.Image(320, 240)
+img.draw_rectangle(0, 0, disp.width(), disp.height(), color=image.Color.from_rgb(255, 0, 0), thickness=-1)
+img.draw_rectangle(10, 10, 100, 100, color=image.Color.from_rgb(255, 0, 0))
+img.draw_string(10, 10, "Hello MaixPy!", color=image.Color.from_rgb(255, 255, 255))
+disp.show(img)
+```
+
+Read an image from the camera and display it:
+```python
+from maix import camera, display, app
+
+disp = display.Display()
+cam = camera.Camera(320, 240)
+while not app.need_exit():
+ img = cam.read()
+ disp.show(img)
+```
+
+> Here, `while not app.need_exit():` is used to facilitate exiting the loop when the `app.set_exit_flag()` method is called elsewhere.
+
+## Adjusting Backlight Brightness
+
+You can manually adjust the backlight brightness in the system's "Settings" app. If you want to adjust the backlight brightness programmatically, you can use the `set_backlight` method, with the parameter being the brightness percentage, ranging from 0 to 100:
+```python
+disp.set_backlight(50)
+```
+
+Note that when the program exits and returns to the app selection interface, the backlight brightness will automatically revert to the system setting.
+
+## Displaying on MaixVision
+
+When running code in MaixVision, images can be displayed on MaixVision for easier debugging and development.
+
+When calling the `show` method, the image will be automatically compressed and sent to MaixVision for display.
+
+Of course, if you don't have a screen, or to save memory by not initializing the screen, you can also directly call the `send_to_maixvision` method of the `image.Image` object to send the image to MaixVision for display.
+```python
+from maix import image
+
+img = image.Image(320, 240)
+img.draw_rectangle(0, 0, img.width(), img.height(), color=image.Color.from_rgb(255, 0, 0), thickness=-1)
+img.draw_rectangle(10, 10, 100, 100, color=image.Color.from_rgb(255, 0, 0))
+img.draw_string(10, 10, "Hello MaixPy!", color=image.Color.from_rgb(255, 255, 255))
+img.send_to_maixvision()
+```
diff --git a/docs/doc/en/vision/face_recognition.md b/docs/doc/en/vision/face_recognition.md
new file mode 100644
index 00000000..86493763
--- /dev/null
+++ b/docs/doc/en/vision/face_recognition.md
@@ -0,0 +1,53 @@
+---
+title: MaixPy Face Recognition
+---
+
+## Introduction to Face Recognition
+
+Face recognition is the process of identifying the location of faces in the current image and determining who they are.
+In addition to detecting faces, face recognition generally relies on a database of known faces so that each detected face can be matched to a specific person or marked as unknown.
+
+## Recognition Principle
+
+* Use an AI model to detect faces and obtain the face coordinates and the coordinates of facial keypoints.
+* Use the coordinates of facial features to perform affine transformation on the face in the image, aligning it to a standard face shape, making it easier for the model to extract facial features.
+* Use a feature extraction model to extract facial feature values.
+* Compare the extracted feature values with the facial feature values recorded in the database: compute the cosine distance between the stored and current feature values and find the closest match in the database; if the distance is within a set threshold, the face is considered to be that person (see the sketch below).
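+
+The comparison step can be illustrated with a small sketch. This is plain Python with made-up feature vectors for illustration only; the actual feature extraction is done by the model as described above.
+
+```python
+# Cosine similarity between a stored feature vector and a newly extracted one.
+# A value close to 1.0 (i.e. a small cosine distance) suggests the same person.
+def cosine_similarity(a, b):
+    dot = sum(x * y for x, y in zip(a, b))
+    norm_a = sum(x * x for x in a) ** 0.5
+    norm_b = sum(x * x for x in b) ** 0.5
+    return dot / (norm_a * norm_b)
+
+saved_feature = [0.12, -0.53, 0.88, 0.10]     # made-up values
+current_feature = [0.10, -0.50, 0.90, 0.08]   # made-up values
+print(cosine_similarity(saved_feature, current_feature))
+```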
+
+## Using MaixPy
+
+The MaixPy `maix.nn` module provides an API for face recognition, which can be used directly, and the model is also built-in. You can also download it from the [MaixHub Model Zoo](https://maixhub.com/model/zoo) (filter for the corresponding hardware platform, such as maixcam).
+
+Recognition:
+
+```python
+import os
+from maix import nn, camera, display, image
+
+recognizer = nn.Face_Recognizer(model="/root/models/face_recognizer.mud")
+if os.path.exists("/root/faces.bin"):
+ recognizer.load_faces("/root/faces.bin")
+cam = camera.Camera(recognizer.input_width(), recognizer.input_height(), recognizer.input_format())
+dis = display.Display()
+
+while 1:
+ img = cam.read()
+ faces = recognizer.recognize(img)
+ for obj in faces:
+ img.draw_rect(obj.x, obj.y, obj.w, obj.h, color = image.COLOR_RED)
+ msg = f'{recognizer.labels[obj.class_id]}: {obj.score:.2f}'
+ img.draw_string(obj.x, obj.y, msg, color = image.COLOR_RED)
+ dis.show(img)
+```
+
+When running this code for the first time, you will find that it can detect faces, but it doesn't recognize anyone. We need to enter the add face mode to learn faces first.
+For example, we can learn faces when the user presses a button:
+
+```python
+faces = recognizer.detect_faces(img)
+for face in faces:
+ print(face)
+ # Here we consider the case where there are multiple faces in one image
+ # You can decide whether to add the face to the database based on the coordinates of `face`
+ recognizer.add_face(face)
+recognizer.save_faces("/root/faces.bin")  # assuming save_faces mirrors the load_faces method used above
+```
diff --git a/docs/doc/en/vision/find_blobs.md b/docs/doc/en/vision/find_blobs.md
new file mode 100644
index 00000000..17eaac9d
--- /dev/null
+++ b/docs/doc/en/vision/find_blobs.md
@@ -0,0 +1,169 @@
+---
+title: MaixPy Find Blobs
+update:
+ - date: 2024-04-03
+ author: neucrack
+ version: 1.0.0
+ content: Initial documentation
+ - date: 2024-04-03
+ author: lxowalle
+ version: 1.0.1
+ content: Added detailed usage for finding blobs
+---
+Before reading this article, make sure you know how to develop with MaixPy. For details, please read [MaixVision -- MaixPy Programming + Graphical Block Programming](../basic/maixvision.md).
+
+## Introduction
+
+This article will introduce how to use MaixPy to find color blobs and how to use the default application of MaixCam to find color blobs.
+
+In vision applications, finding color blobs is a very common requirement, such as robots finding color blobs, automated production lines finding color blobs, etc., which requires identifying specific color areas in the image and obtaining information such as the position and size of these areas.
+
+## Using MaixPy to Find Blobs
+
+The `maix.image.Image` module in MaixPy provides the `find_blobs` method, which can conveniently find color blobs.
+
+### How to Find Blobs
+
+A simple example to find color blobs and draw bounding boxes:
+
+```python
+from maix import image, camera, display
+
+cam = camera.Camera(320, 240)
+disp = display.Display()
+
+# Select the corresponding configuration based on the color of the blob
+thresholds = [[0, 80, 40, 80, 10, 80]] # red
+# thresholds = [[0, 80, -120, -10, 0, 30]] # green
+# thresholds = [[0, 80, 30, 100, -120, -60]] # blue
+
+while 1:
+ img = cam.read()
+ blobs = img.find_blobs(thresholds, pixels_threshold=500)
+ for blob in blobs:
+ img.draw_rect(blob[0], blob[1], blob[2], blob[3], image.COLOR_GREEN)
+ disp.show(img)
+```
+
+Steps:
+
+1. Import the image, camera, and display modules
+
+ ```python
+ from maix import image, camera, display
+ ```
+
+2. Initialize the camera and display
+
+ ```python
+ cam = camera.Camera(320, 240) # Initialize the camera with an output resolution of 320x240 in RGB format
+ disp = display.Display()
+ ```
+
+3. Get the image from the camera and display it
+
+ ```python
+ while 1:
+ img = cam.read()
+ disp.show(img)
+ ```
+
+4. Call the `find_blobs` method to find color blobs in the camera image and draw them on the screen
+
+ ```python
+ blobs = img.find_blobs(thresholds, pixels_threshold=500)
+ for blob in blobs:
+ img.draw_rect(blob[0], blob[1], blob[2], blob[3], image.COLOR_GREEN)
+ ```
+
+ - `img` is the camera image obtained through `cam.read()`. When initialized with `cam = camera.Camera(320, 240)`, the `img` object is an RGB image with a resolution of 320x240.
+ - `img.find_blobs` is used to find color blobs. `thresholds` is a list of color thresholds, where each element is a color threshold. Multiple thresholds can be passed in to find multiple colors simultaneously. Each color threshold is in the format `[L_MIN, L_MAX, A_MIN, A_MAX, B_MIN, B_MAX]`, where `L`, `A`, and `B` are the three channels in the LAB color space. The `L` channel represents brightness, the `A` channel represents the red-green component, and the `B` channel represents the blue-yellow component. `pixels_threshold` is a pixel count threshold used to filter out unwanted small blobs.
+  - `img.draw_rect` is used to draw bounding boxes around the color blobs. `blob[0]`, `blob[1]`, `blob[2]`, and `blob[3]` represent the x-coordinate of the top-left corner of the blob, the y-coordinate of the top-left corner of the blob, the width of the blob, and the height of the blob, respectively.
+
+### Common Parameter Explanations
+
+Here are explanations of commonly used parameters. If you cannot find parameters that can implement your application, you may need to consider using other algorithms or extending the required functionality based on the current algorithm's results.
+
+| Parameter | Description | Example |
+| ---------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
+| thresholds | Thresholds based on the LAB color space, thresholds=[[l_min, l_max, a_min, a_max, b_min, b_max]], representing:<br>Brightness range [l_min, l_max]<br>Green-to-red component range [a_min, a_max]<br>Blue-to-yellow component range [b_min, b_max]<br>Multiple thresholds can be set simultaneously | Set two thresholds to detect red and green:<br>```img.find_blobs(thresholds=[[0, 80, 40, 80, 10, 80], [0, 80, -120, -10, 0, 30]])```<br>The red threshold is [0, 80, 40, 80, 10, 80]<br>The green threshold is [0, 80, -120, -10, 0, 30] |
+| invert | Enable threshold inversion; when enabled, the passed thresholds are inverted. Default is False. | Enable threshold inversion:<br>```img.find_blobs(invert=True)``` |
+| roi | Set the rectangular region for the algorithm to compute, roi=[x, y, w, h], where x and y represent the coordinates of the top-left corner of the rectangle, and w and h represent the width and height of the rectangle. The default is the entire image. | Compute only the region at (50, 50) with a width and height of 100:<br>```img.find_blobs(roi=[50, 50, 100, 100])``` |
+| area_threshold | Filter out blobs with a pixel area smaller than area_threshold, in units of pixels. The default is 10. This parameter can be used to filter out useless small blobs. | Filter out blobs with an area smaller than 1000:<br>```img.find_blobs(area_threshold=1000)``` |
+| pixels_threshold | Filter out blobs with fewer valid pixels than pixels_threshold. The default is 10. This parameter can be used to filter out useless small blobs. | Filter out blobs with fewer than 1000 valid pixels:<br>```img.find_blobs(pixels_threshold=1000)``` |
+
+This article introduces commonly used methods. For more APIs, please see the [image](../../../api/maix/image.md) section of the API documentation.
+
+## Using the Find Blobs App
+
+To quickly verify the find blobs functionality, you can first use the find blobs application provided by MaixCam to experience the effect of finding color blobs.
+
+### Usage
+Open the device, select the `Find Blobs` app, then select the color to be recognized from the bottom options or customize a color, and you can recognize the corresponding color. At the same time, the serial port will also output the recognized coordinates and color information.
+
+
+
+### Detailed Explanation
+
+The app interface is as follows:
+
+![](../../../static/image/find_blobs_app.jpg)
+
+#### Using Default Configuration
+
+The find blobs app provides four default configurations: `red`, `green`, `blue`, and `user`. `red`, `green`, and `blue` are used to `find red, green, and blue color blobs`, respectively, while `user` is mainly provided for `user-defined color blob finding`. The method for customizing configurations is described below. For a quick experience, you can switch to the corresponding configuration by `clicking` the `buttons` at the bottom of the interface.
+
+#### Finding Custom Color Blobs
+
+The app provides two ways to find custom color blobs: using adaptive LAB thresholds and manually setting LAB thresholds.
+
+##### 1. Finding Color Blobs with Adaptive LAB Thresholds
+
+Steps:
+
+1. `Click` the `options icon` in the bottom-left corner to enter configuration mode.
+2. Point the `camera` at the `object` you need to `find`, `click` on the `target object` on the screen, and the `left side` will display a `rectangular frame` of the object's color and show the LAB values of that color.
+3. Click on the appearing `rectangular frame`, and the system will `automatically set` the LAB thresholds. At this point, the image will outline the edges of the object.
+
+##### 2. Manually Setting LAB Thresholds to Find Color Blobs
+
+Manual setting allows for more precise targeting of the desired color blobs.
+
+Steps:
+
+1. `Click` the `options icon` in the bottom-left corner to enter configuration mode.
+2. Point the `camera` at the `object` you need to `find`, `click` on the `target object` on the screen, and the `left side` will display a `rectangular frame` of the object's color and show the `LAB values` of that color.
+3. Click on the bottom options `L Min`, `L Max`, `A Min`, `A Max`, `B Min`, `B Max`. After clicking, a slider will appear on the right side to set the value for that option. These values correspond to the minimum and maximum values of the L, A, and B channels in the LAB color format, respectively.
+4. Referring to the `LAB values` of the object color calculated in step 2, adjust `L Min`, `L Max`, `A Min`, `A Max`, `B Min`, `B Max` to appropriate values to identify the corresponding color blobs. For example, if `LAB = (20, 50, 80)`, since `L=20`, to accommodate a certain range, set `L Min=10` and `L Max=30`. Similarly, since `A=50`, set `A Min=40` and `A Max=60`. Since `B=80`, set `B Min=70` and `B Max=90`.
+
+#### Getting Detection Data via Serial Protocol
+
+The find blobs app supports reporting information about detected color blobs via the serial port (default baud rate is 115200).
+
+Since there is only one type of report message, we can illustrate its content with an example.
+
+For instance, if the report message is:
+
+```
+AA CA AC BB 14 00 00 00 E1 08 EE 00 37 00 15 01 F7 FF 4E 01 19 00 27 01 5A 00 A7 20
+```
+
+- `AA CA AC BB`: Protocol header, content is fixed
+- `14 00 00 00`: Data length (little-endian), i.e. the total length of the data that follows, excluding the protocol header and the length field itself (0x14 = 20 bytes here)
+- `E1`: Flag, used to identify the serial message flag
+- `08`: Command type, for the find blobs app application, this value is fixed at 0x08
+- `EE 00 37 00 15 01 F7 FF 4E 01 19 00 27 01 5A 00`: Coordinates of the four vertices of the found color blob, with each value represented by 2 bytes in little-endian format. `EE 00` and `37 00` represent the first vertex coordinate as (238, 55), `15 01` and `F7 FF` represent the second vertex coordinate as (277, -9), `4E 01` and `19 00` represent the third vertex coordinate as (334, 25), `27 01` and `5A 00` represent the fourth vertex coordinate as (295, 90).
+- `A7 20`: CRC checksum value, used to verify if the frame data has errors during transmission.
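+
+As an illustration, here is a small sketch that parses a frame like the one above. It assumes the length field is little-endian and the coordinates are signed 16-bit little-endian values, as in the example; the CRC check is skipped because the exact CRC variant is not described in this document.
+
+```python
+import struct
+
+frame = bytes.fromhex(
+    "AA CA AC BB 14 00 00 00 E1 08 EE 00 37 00 15 01 F7 FF 4E 01 19 00 27 01 5A 00 A7 20".replace(" ", "")
+)
+
+assert frame[:4] == b"\xAA\xCA\xAC\xBB"        # protocol header
+length = struct.unpack("<I", frame[4:8])[0]    # data length, 0x14 = 20
+flag, cmd = frame[8], frame[9]                 # flag 0xE1, command type 0x08
+coords = struct.unpack("<8h", frame[10:26])    # 4 vertices, signed 16-bit little-endian
+vertices = [(coords[i], coords[i + 1]) for i in range(0, 8, 2)]
+print(length, hex(flag), hex(cmd), vertices)   # -> 20 0xe1 0x8 [(238, 55), (277, -9), (334, 25), (295, 90)]
+```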
+
+## About the LAB Color Space
+
+The LAB color space, like the RGB color space, is a way to represent colors. LAB can represent all colors visible to the human eye. If you need to learn more about LAB, you can search for relevant articles online, which will provide more details. However, for you, it should be sufficient to understand why LAB is advantageous for MaixPy.
+
+Advantages of LAB for MaixPy:
+
+1. The color gamut of the LAB color space is larger than that of RGB, so it can completely replace RGB.
+2. In the LAB color space, since the L channel is the brightness channel, we often set it to a relatively large range (commonly [0, 80]), and when coding, we mainly focus on the A and B channels. This can save a lot of time spent struggling with how to select color thresholds.
+3. The color perception in the LAB color space is more uniform and easier to debug with code. For example, if you only need to find red color blobs, you can fix the values of the L and B channels and only adjust the value of the A channel (in cases where high color accuracy is not required). For RGB channels, you generally need to adjust all three R, G, and B channels simultaneously to find suitable thresholds.
+
diff --git a/docs/doc/en/vision/image_ops.md b/docs/doc/en/vision/image_ops.md
new file mode 100644
index 00000000..01a091a0
--- /dev/null
+++ b/docs/doc/en/vision/image_ops.md
@@ -0,0 +1,340 @@
+---
+title: MaixPy Basic Image Operations
+update:
+
+- date: 2024-04-03
+ author: neucrack
+ version: 1.0.0
+ content: Initial document
+---
+
+## Introduction
+
+Images play a very important role in visual applications. Whether it's a picture or a video, since a video is essentially a series of frames, image processing is the foundation of visual applications.
+
+## API Documentation
+
+This document introduces common methods. For more APIs, refer to the documentation of the maix.image module.
+
+## Image Formats
+
+MaixPy provides a basic image module `image`, where the most important part is the `image.Image` class, which is used for image creation and various basic image operations, as well as image loading and saving.
+
+There are many image formats, and we generally use `image.Format.FMT_RGB888` or `image.Format.FMT_RGBA8888` or `image.Format.FMT_GRAYSCALE` or `image.Format.FMT_BGR888`, etc.
+
+We all know that the three colors `RGB` can synthesize any color, so in most cases, we use `image.Format.FMT_RGB888`, which is sufficient. `RGB888` is `RGB packed` in memory, i.e., the arrangement in memory is:
+`pixel1_red, pixel1_green, pixel1_blue, pixel2_red, pixel2_green, pixel2_blue, ...` arranged in sequence.
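+
+As a quick illustration of this packed layout, the byte offset of a pixel can be computed as follows. This is a plain-Python sketch; the helper below is not part of the MaixPy API.
+
+```python
+# For an RGB888 packed image, the three bytes of pixel (x, y) start at this byte offset
+def rgb888_pixel_offset(x, y, width):
+    return (y * width + x) * 3
+
+# e.g. in a 320x240 image, pixel (10, 2) starts at byte (2*320 + 10) * 3 = 1950
+print(rgb888_pixel_offset(10, 2, 320))
+```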
+
+## Creating an Image
+
+Creating an image is very simple, you only need to specify the width and height of the image, and the image format:
+
+```python
+from maix import image
+
+img = image.Image(320, 240, image.Format.FMT_RGB888)
+print(img)
+print(img.width(), img.height(), img.format())
+```
+
+`320` is the width of the image, `240` is the height of the image, and `image.Format.FMT_RGB888` is the format of the image. The format parameter can be omitted, and the default is `image.Format.FMT_RGB888`.
+
+Here, you can get the width, height, and format of the image using `img.width()`, `img.height()`, and `img.format()`.
+
+## Displaying on the Screen
+
+MaixPy provides the `maix.display.Display` class, which can conveniently display images:
+
+```python
+from maix import image, display
+
+disp = display.Display()
+
+img = image.Image(320, 240, image.Format.FMT_RGB888)
+disp.show(img)
+```
+
+Note that here, since there is no image data, a black image is displayed. See the following sections for how to modify the image.
+
+## Reading Images from the File System
+
+MaixPy provides the `maix.image.load` method, which can read images from the file system:
+
+```python
+from maix import image
+
+img = image.load("/root/image.jpg")
+print(img)
+```
+
+Note that here, `/root/image.jpg` has been transferred to the board in advance. You can refer to the previous tutorials for the method.
+It supports `jpg` and `png` image formats.
+
+## Saving Images to the File System
+
+MaixPy's `maix.image.Image` provides the `save` method, which can save images to the file system:
+
+```python
+from maix import image
+
+img = image.Image(320, 240, image.Format.FMT_RGB888)
+
+# do something with img
+img.save("/root/image.jpg")
+```
+
+## Drawing Rectangles
+
+`image.Image` provides the `draw_rect` method, which can draw rectangles on the image:
+
+```python
+from maix import image
+
+img = image.Image(320, 240, image.Format.FMT_RGB888)
+img.draw_rect(10, 10, 100, 100, image.Color.from_rgb(255, 0, 0))
+```
+
+Here, the parameters are: `x`, `y`, `w`, `h`, `color`. `x` and `y` are the coordinates of the top-left corner of the rectangle, `w` and `h` are the width and height of the rectangle, and `color` is the color of the rectangle, which can be created using the `image.Color.from_rgb` method.
+You can specify the line width of the rectangle using `thickness`, which defaults to `1`.
+
+You can also draw a solid rectangle by passing `thickness=-1`:
+
+```python
+from maix import image
+
+img = image.Image(320, 240, image.Format.FMT_RGB888)
+img.draw_rect(10, 10, 100, 100, image.Color.from_rgb(255, 0, 0), thickness=-1)
+```
+
+## Writing Strings
+
+`image.Image` provides the `draw_string` method, which can write text on the image:
+
+```python
+from maix import image
+
+img = image.Image(320, 240, image.Format.FMT_RGB888)
+img.draw_string(10, 10, "Hello MaixPy", image.Color.from_rgb(255, 0, 0))
+```
+
+Here, the parameters are: `x`, `y`, `text`, `color`. `x` and `y` are the coordinates of the top-left corner of the text, `text` is the text to be written, and `color` is the color of the text, which can be created using the `image.Color.from_rgb` method.
+
+You can also enlarge the font by passing the `scale` parameter:
+
+```python
+img.draw_string(10, 10, "Hello MaixPy", image.Color.from_rgb(255, 0, 0), scale=2)
+```
+
+Get the width and height of the font:
+
+```python
+w, h = img.string_size("Hello MaixPy", scale=2)
+print(w, h)
+```
+
+**Note** that here, `scale` is the magnification factor, and the default is `1`. It should be consistent with `draw_string`.
+
+## Drawing Lines
+
+`image.Image` provides the `draw_line` method, which can draw lines on the image:
+
+```python
+from maix import image
+
+img = image.Image(320, 240, image.Format.FMT_RGB888)
+img.draw_line(10, 10, 100, 100, image.Color.from_rgb(255, 0, 0))
+```
+
+Here, the parameters are: `x1`, `y1`, `x2`, `y2`, `color`. `x1` and `y1` are the coordinates of the starting point of the line, `x2` and `y2` are the coordinates of the end point of the line, and `color` is the color of the line, which can be created using the `image.Color.from_rgb` method.
+
+## Drawing Circles
+
+`image.Image` provides the `draw_circle` method, which can draw circles on the image:
+
+```python
+from maix import image
+
+img = image.Image(320, 240, image.Format.FMT_RGB888)
+img.draw_circle(100, 100, 50, image.Color.from_rgb(255, 0, 0))
+```
+
+Here, the parameters are: `x`, `y`, `r`, `color`. `x` and `y` are the coordinates of the center of the circle, `r` is the radius, and `color` is the color of the circle, which can be created using the `image.Color.from_rgb` method.
+
+## Resizing Images
+
+`image.Image` provides the `resize` method, which can resize images:
+
+```python
+from maix import image
+
+img = image.Image(320, 240, image.Format.FMT_RGB888)
+img_new = img.resize(160, 120)
+print(img, img_new)
+```
+
+Note that here, the `resize` method returns a new image object, and the original image remains unchanged.
+
+## Cropping Images
+
+`image.Image` provides the `crop` method, which can crop images:
+
+```python
+from maix import image
+
+img = image.Image(320, 240, image.Format.FMT_RGB888)
+img_new = img.crop(10, 10, 100, 100)
+print(img, img_new)
+```
+
+Note that here, the `crop` method returns a new image object, and the original image remains unchanged.
+
+## Rotating Images
+
+`image.Image` provides the `rotate` method, which can rotate images:
+
+```python
+from maix import image
+
+img = image.Image(320, 240, image.Format.FMT_RGB888)
+img_new = img.rotate(90)
+print(img, img_new)
+```
+
+Note that here, the `rotate` method returns a new image object, and the original image remains unchanged.
+
+## Copying Images
+
+`image.Image` provides the `copy` method, which can copy an independent image:
+
+```python
+from maix import image
+
+img = image.Image(320, 240, image.Format.FMT_RGB888)
+img_new = img.copy()
+print(img, img_new)
+```
+
+## Affine Transformations
+
+`image.Image` provides the `affine` method, which can perform affine transformations. By providing the coordinates of three or more points in the current image and the corresponding coordinates in the target image, you can automatically perform operations such as rotation, scaling, and translation on the image to transform it into the target image:
+
+```python
+from maix import image
+
+img = image.Image(320, 240, image.Format.FMT_RGB888)
+img_new = img.affine([(10, 10), (100, 10), (10, 100)], [(10, 10), (100, 20), (20, 100)])
+print(img, img_new)
+```
+
+For more parameters and usage, please refer to the API documentation.
+
+## Drawing Keypoints
+
+`image.Image` provides the `draw_keypoints` method, which can draw keypoints on the image:
+
+```python
+from maix import image
+
+img = image.Image(320, 240, image.Format.FMT_RGB888)
+
+keypoints = [(10, 10), (100, 10), (10, 100)]
+img.draw_keypoints(keypoints, image.Color.from_rgb(255, 0, 0), size=10, thickness=1, fill=False)
+```
+
+This draws three red keypoints at the coordinates `(10, 10)`, `(100, 10)`, and `(10, 100)`. The size of the keypoints is `10`, the line width is `1`, and they are not filled.
+
+## Drawing Crosses
+
+`image.Image` provides the `draw_cross` method, which can draw crosses on the image:
+
+```python
+from maix import image
+
+img = image.Image(320, 240, image.Format.FMT_RGB888)
+img.draw_cross(100, 100, image.Color.from_rgb(255, 0, 0), size=5, thickness=1)
+```
+
+This draws a red cross at the coordinate `(100, 100)`. The extension size of the cross is `5`, so the length of the line segment is `2 * size + thickness`, and the line width is `1`.
+
+## Drawing Arrows
+
+`image.Image` provides the `draw_arrow` method, which can draw arrows on the image:
+
+```python
+from maix import image
+
+img = image.Image(320, 240, image.Format.FMT_RGB888)
+img.draw_arrow(10, 10, 100, 100, image.Color.from_rgb(255, 0, 0), thickness=1)
+```
+
+This draws a red arrow starting from the coordinate `(10, 10)`, with the end point at `(100, 100)`, and a line width of `1`.
+
+## Drawing Images
+
+`image.Image` provides the `draw_image` method, which can draw images on the image:
+
+```python
+from maix import image
+
+img = image.Image(320, 240, image.Format.FMT_RGB888)
+img2 = image.Image(100, 100, image.Format.FMT_RGB888)
+img2.draw_rect(10, 10, 90, 90, image.Color.from_rgb(255, 0, 0))
+img.draw_image(10, 10, img2)
+```
+
+## Converting Formats
+
+`image.Image` provides the `to_format` method, which can convert image formats:
+
+```python
+from maix import image
+
+img = image.Image(320, 240, image.Format.FMT_RGB888)
+img_new = img.to_format(image.Format.FMT_BGR888)
+print(img, img_new)
+img_jpg = img.to_format(image.Format.FMT_JPEG)
+print(img, img_jpg)
+```
+
+Note that here, the `to_format` method returns a new image object, and the original image remains unchanged.
+
+## Converting between Numpy/OpenCV Formats
+
+You can also convert an image to a `numpy` array, which can then be processed by libraries such as `numpy` and `opencv`:
+
+```python
+from maix import image
+
+img = image.Image(320, 240, image.Format.FMT_RGB888)
+img_np = image.image2cv(img)
+img2 = image.cv2image(img_np)
+print(type(img_np), img_np, img_np.shape)
+print(type(img2), img2)
+```
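+
+For example, a minimal sketch of passing the converted array to OpenCV and converting the result back (this assumes `opencv-python` and `numpy` are installed on the device; the blur is just an illustration):
+
+```python
+from maix import image
+import cv2
+
+img = image.Image(320, 240, image.Format.FMT_RGB888)
+img_np = image.image2cv(img)                  # numpy array usable by OpenCV
+img_np = cv2.GaussianBlur(img_np, (5, 5), 0)  # any OpenCV processing
+img_blur = image.cv2image(img_np)             # convert back to a maix image
+print(img_blur)
+```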
+
+## Converting to and from bytes Data
+
+`image.Image` provides the `to_bytes` method, which can convert an image to `bytes` data:
+
+```python
+from maix import image
+
+img = image.Image(320, 240, image.Format.FMT_RGB888)
+data = img.to_bytes()
+print(type(data), len(data), img.data_size())
+
+img2 = image.Image(320, 240, image.Format.FMT_RGB888, data)
+print(img2)
+```
+
+Here, `to_bytes` returns a new `bytes` object that owns its own memory, so using it does not affect the original image.
+The `image.Image` constructor can build an image object directly from `bytes` data via the `data` parameter. The new image also owns its own memory, so it does not affect `data`.
+
+Since memory copying is involved, this method is relatively time-consuming and should not be used frequently.
+
+> If you want to optimize your program without copying (not recommended for casual use, as poorly written code can easily cause crashes), please refer to the API documentation.
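+
+As an illustration, here is a small sketch (with a hypothetical file path) that writes the raw pixel data to a file and rebuilds an image from it, using only the calls shown above plus standard Python file I/O:
+
+```python
+from maix import image
+
+img = image.Image(320, 240, image.Format.FMT_RGB888)
+data = img.to_bytes()
+
+# write the raw RGB888 pixels to a file (hypothetical path)
+with open("/root/raw_rgb888.bin", "wb") as f:
+    f.write(data)
+
+# read the bytes back and rebuild an image of the same size and format
+with open("/root/raw_rgb888.bin", "rb") as f:
+    data2 = f.read()
+img2 = image.Image(320, 240, image.Format.FMT_RGB888, data2)
+print(img2)
+```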
+
+## More Basic API Usage
+
+For more API usage, please refer to the documentation of the `maix.image` module.
+
diff --git a/docs/doc/en/vision/maixhub_train.md b/docs/doc/en/vision/maixhub_train.md
new file mode 100644
index 00000000..57397550
--- /dev/null
+++ b/docs/doc/en/vision/maixhub_train.md
@@ -0,0 +1,52 @@
+---
+title: Using MaixHub to Train AI Models for MaixPy
+update:
+ - date: 2024-04-03
+ author: neucrack
+ version: 1.0.0
+ content: Initial document
+---
+
+## Introduction
+
+MaixHub offers the functionality to train AI models online, directly within a browser. This eliminates the need for expensive hardware, complex development environments, or coding skills, making it highly suitable for beginners as well as experts who prefer not to delve into code.
+
+## Basic Steps to Train a Model Using MaixHub
+
+### Identify the Data and Model Types
+
+To train an AI model, you first need to determine the type of data and model. As of April 2024, MaixHub provides two model types for image data: `Object Classification Models` and `Object Detection Models`. Object detection requires marking the position of every object in each image, which takes more annotation effort; object classification only needs to know what is in the image, with no coordinates, so it is simpler and recommended for beginners.
+
+### Collect Data
+
+As discussed in AI basics, training a model requires a dataset for the AI to learn from. For image training, you need to create a dataset and upload images to it.
+
+Make sure the device is connected to the internet (WiFi).
+Create a dataset on MaixHub first, then click the device upload button, which will display a QR code. On your device, open the MaixHub app, choose data collection, and scan the QR code to connect to MaixHub; you can then take photos and upload them directly.
+
+It's important to distinguish between the training and validation datasets. To ensure that performance during actual operation matches the training results, the validation dataset must use images of the same quality and conditions as those captured during actual operation. It is also advisable to use images taken by the device for the training set. If you use images from the internet, restrict them to the training set only; the closer the dataset is to actual operating conditions, the better.
+
+### Annotate Data
+
+For classification models, images are annotated during upload by selecting the appropriate category for each image.
+
+For object detection models, after uploading, you need to manually annotate each image by marking the coordinates, size, and category of the objects to be recognized.
+This annotation process can also be done offline on your own computer using software like labelimg, then imported into MaixHub using the dataset import feature.
+Utilize shortcuts during annotation to speed up the process. MaixHub will also add more annotation aids and automatic annotation tools in the future (there is already an automatic annotation tool available for videos that you can try).
+
+### Train the Model
+
+Select the training parameters, choose `maixcam` as the device platform, and wait in the training queue. You can monitor the training progress in real time until it completes.
+
+### Deploy the Model
+
+Once training is complete, you can use the deploy function in the MaixHub app on your device to scan a code and deploy.
+The device will automatically download and run the model, storing it locally for future use.
+
+If you find the recognition results satisfactory, you can share the model to the model library with a single click for others to use.
+
+## How to Use
+
+Please visit [MaixHub](https://maixhub.com) to register an account, then log in. There are video tutorials on the homepage for learning.
+
+Note that the tutorials may use the M2dock development board; the process for MaixCAM is similar, although the MaixHub application on the device may differ slightly. The overall workflow is the same, so adapt the steps as needed.
diff --git a/docs/doc/en/vision/object_track.md b/docs/doc/en/vision/object_track.md
new file mode 100644
index 00000000..e69de29b
diff --git a/docs/doc/en/vision/qrcode.md b/docs/doc/en/vision/qrcode.md
new file mode 100644
index 00000000..4e221df6
--- /dev/null
+++ b/docs/doc/en/vision/qrcode.md
@@ -0,0 +1,95 @@
+---
+title: MaixPy QR Code Recognition
+update:
+ - date: 2024-04-03
+ author: lxowalle
+ version: 1.0.0
+ content: Initial document
+---
+
+Before reading this article, make sure you are familiar with how to develop with MaixPy. For details, please read [MaixVision -- MaixPy Programming + Graphical Block Programming](../basic/maixvision.md).
+
+## Introduction
+
+This article explains how to use MaixPy for QR code recognition.
+
+## Using MaixPy to Recognize QR Codes
+
+MaixPy's `maix.image.Image` includes the `find_qrcodes` method for QR code recognition.
+
+### How to Recognize QR Codes
+
+A simple example that recognizes QR codes and draws a bounding box:
+
+```python
+from maix import image, camera, display
+
+cam = camera.Camera(320, 240)
+disp = display.Display()
+
+while True:
+ img = cam.read()
+ qrcodes = img.find_qrcodes()
+ for qr in qrcodes:
+ corners = qr.corners()
+ for i in range(4):
+ img.draw_line(corners[i][0], corners[i][1], corners[(i + 1) % 4][0], corners[(i + 1) % 4][1], image.COLOR_RED)
+ img.draw_string(qr.x(), qr.y() - 15, qr.payload(), image.COLOR_RED)
+ disp.show(img)
+```
+
+Steps:
+
+1. Import the image, camera, and display modules:
+
+ ```python
+ from maix import image, camera, display
+ ```
+
+2. Initialize the camera and display:
+
+ ```python
+ cam = camera.Camera(320, 240) # Initialize the camera with a resolution of 320x240 in RGB format
+ disp = display.Display()
+ ```
+
+3. Capture and display images from the camera:
+
+ ```python
+ while True:
+ img = cam.read()
+ disp.show(img)
+ ```
+
+4. Use the `find_qrcodes` method to detect QR codes in the camera image:
+
+ ```python
+ qrcodes = img.find_qrcodes()
+ ```
+
+ - `img` is the camera image captured by `cam.read()`. When initialized as `cam = camera.Camera(320, 240)`, the `img` object is a 320x240 resolution RGB image.
+ - `img.find_qrcodes` searches for QR codes and saves the results in `qrcodes` for further processing.
+
+5. Process and display the results of QR code recognition on the screen:
+
+ ```python
+ for qr in qrcodes:
+ corners = qr.corners()
+ for i in range(4):
+ img.draw_line(corners[i][0], corners[i][1], corners[(i + 1) % 4][0], corners[(i + 1) % 4][1], image.COLOR_RED)
+ img.draw_string(qr.x(), qr.y() - 15, qr.payload(), image.COLOR_RED)
+ ```
+
+ - `qrcodes` contains the results from `img.find_qrcodes()`. If no QR codes are found, `qrcodes` will be empty.
+ - `qr.corners()` retrieves the coordinates of the four corners of the detected QR code. `img.draw_line()` uses these coordinates to draw the QR code outline.
+ - `img.draw_string` displays information about the QR code content and position. `qr.x()` and `qr.y()` retrieve the x and y coordinates of the QR code's top-left corner, and `qr.payload()` retrieves the content of the QR code.
+
+### Common Parameter Explanation
+
+The table below lists common parameters and their explanations. If you cannot find a parameter that fits your application, consider whether to use a different algorithm or to extend the functionality based on the current algorithm's results.
+
+| Parameter | Description | Example |
+| --------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
+| roi | Sets the rectangular region for the algorithm to process. `roi=[x, y, w, h]`, where `x` and `y` are the coordinates of the region's top-left corner and `w` and `h` are its width and height. Defaults to the entire image. | Process only the region starting at (50, 50) with a width and height of 100: `img.find_qrcodes(roi=[50, 50, 100, 100])` |
+
+This article introduces common methods. For more API details, refer to the [image](../../../api/maix/image.md) section of the API documentation.
diff --git a/docs/doc/en/vision/self_learn_classifier.md b/docs/doc/en/vision/self_learn_classifier.md
new file mode 100644
index 00000000..22e8daf9
--- /dev/null
+++ b/docs/doc/en/vision/self_learn_classifier.md
@@ -0,0 +1,53 @@
+---
+title: MaixPy Self-Learning Classifier
+---
+
+## Introduction to MaixPy Self-Learning Classifier
+
+Typically, to recognize new categories you need to collect a new dataset and train a model on a computer, which is cumbersome and complex. The self-learning classifier eliminates computer-based training: it can learn new objects directly on the device, which makes it suitable for less complex scenarios.
+
+For example, if there are a drink bottle and a mobile phone in front of you, take a photo of each to serve as the basis for two categories. Then, collect several photos from different angles of each item, extract their features and save them. During recognition, the image's features are compared with the saved feature values, and the closest match determines the classification.
+
+## Using the Self-Learning Classifier in MaixPy
+
+Steps:
+
+* Collect `n` category images, one for each of the `n` categories to be learned.
+* Collect `n*m` sample images, `m` for each category; the order does not matter.
+* Start learning.
+* Classify images and output the results.
+
+A simplified version of the code is shown below; for the full version, please refer to the complete example.
+
+```python
+from maix import nn, image
+
+classifier = nn.SelfLearnClassifier(model="/root/models/mobilenetv2.mud", feature_layer=None)
+
+img1 = image.load("/root/1.jpg")
+img2 = image.load("/root/2.jpg")
+img3 = image.load("/root/3.jpg")
+sample_1 = image.load("/root/sample_1.jpg")
+sample_2 = image.load("/root/sample_2.jpg")
+sample_3 = image.load("/root/sample_3.jpg")
+sample_4 = image.load("/root/sample_4.jpg")
+sample_5 = image.load("/root/sample_5.jpg")
+sample_6 = image.load("/root/sample_6.jpg")
+
+
+classifier.add_class(img1)
+classifier.add_class(img2)
+classifier.add_class(img3)
+classifier.add_sample(sample_1)
+classifier.add_sample(sample_2)
+classifier.add_sample(sample_3)
+classifier.add_sample(sample_4)
+classifier.add_sample(sample_5)
+classifier.add_sample(sample_6)
+
+classifier.learn()
+
+img = image.load("/root/test.jpg")
+max_idx, max_score = classifier.classify(img)
+print(max_idx, max_score)
+```
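+
+As a rough usage sketch (not the official example), after `learn()` you could classify live camera frames and show the best match on screen. The file paths and camera resolution below are assumptions, and any input-size handling required by the model is omitted:
+
+```python
+from maix import nn, image, camera, display, app
+
+classifier = nn.SelfLearnClassifier(model="/root/models/mobilenetv2.mud", feature_layer=None)
+
+# learn two classes from saved photos, as in the snippet above (paths are assumptions)
+classifier.add_class(image.load("/root/class_0.jpg"))
+classifier.add_class(image.load("/root/class_1.jpg"))
+for path in ["/root/s0.jpg", "/root/s1.jpg", "/root/s2.jpg", "/root/s3.jpg"]:
+    classifier.add_sample(image.load(path))
+classifier.learn()
+
+# classify live camera frames and draw the result
+cam = camera.Camera(224, 224)  # assumed resolution, adjust to your model input
+disp = display.Display()
+while not app.need_exit():
+    img = cam.read()
+    max_idx, max_score = classifier.classify(img)
+    img.draw_string(4, 4, f"class {max_idx}: {max_score:.2f}", image.COLOR_RED)
+    disp.show(img)
+```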
diff --git a/docs/doc/en/vision/self_learn_detector.md b/docs/doc/en/vision/self_learn_detector.md
new file mode 100644
index 00000000..9b441080
--- /dev/null
+++ b/docs/doc/en/vision/self_learn_detector.md
@@ -0,0 +1,13 @@
+---
+title: MaixPy Self-Learning Detector
+---
+
+## MaixPy Self-Learning Detector
+
+Similar to the self-learning classifier, the self-learning detector does not require training: simply taking a few photos of the object to be detected is enough to enable detection, which is very useful in simple detection scenarios.
+Unlike the self-learning classifier, the detector also provides the coordinates and size of the detected object.
+
+## Using the Self-Learning Detector in MaixPy
+
+TODO:
+
diff --git a/docs/doc/en/vision/yolov5.md b/docs/doc/en/vision/yolov5.md
new file mode 100644
index 00000000..5be4c750
--- /dev/null
+++ b/docs/doc/en/vision/yolov5.md
@@ -0,0 +1,43 @@
+---
+title: Using YOLOv5 Model for Object Detection with MaixPy
+---
+
+## Concept of Object Detection
+
+Object detection refers to identifying the position and category of targets in an image or video, such as detecting objects like apples and airplanes in a picture, and marking the position of these objects.
+
+Unlike classification, object detection includes positional information, so the result is usually a rectangular box that frames the location of the object.
+
+## Using Object Detection in MaixPy
+
+MaixPy comes with the `YOLOv5` model by default, which can be used directly:
+
+```python
+from maix import camera, display, image, nn, app
+
+detector = nn.YOLOv5(model="/root/models/yolov5s.mud")
+
+cam = camera.Camera(detector.input_width(), detector.input_height(), detector.input_format())
+dis = display.Display()
+
+while not app.need_exit():
+ img = cam.read()
+ objs = detector.detect(img, conf_th = 0.5, iou_th = 0.45)
+ for obj in objs:
+ img.draw_rect(obj.x, obj.y, obj.w, obj.h, color = image.COLOR_RED)
+ msg = f'{detector.labels[obj.class_id]}: {obj.score:.2f}'
+ img.draw_string(obj.x, obj.y, msg, color = image.COLOR_RED)
+ dis.show(img)
+```
+
+Demonstration video:
+
+