Merge pull request #22 from majianjia/dev
update docs
majianjia authored Mar 30, 2019
2 parents 26c02c0 + b82a30a commit 94afeb2
Showing 1 changed file with 12 additions and 10 deletions.
22 changes: 12 additions & 10 deletions README.md
@@ -1,5 +1,6 @@

# Neural Network on Microcontroller (NNoM)
[![Build Status](https://travis-ci.org/majianjia/nnom.svg?branch=master)](https://travis-ci.org/majianjia/nnom)

NNoM is a higher-level layer-based Neural Network library specifically for microcontrollers.

@@ -21,12 +22,10 @@ Guides:

Examples:

[MNIST-DenseNet example](examples/mnist-densenet)

[RT-Thread-MNIST example (in Chinese)](https://majianjia.github.io/nnom/example_mnist_simple_cn/)

---

## Why NNoM?
The aim of NNoM is to provide a lightweight, user-friendly and flexible interface for fast deployment.
@@ -40,11 +39,14 @@ Nowadays, neural networks are **wider**, **deeper**, and **denser**.
>[3] Huang, G., Liu, Z., Van Der Maaten, L., & Weinberger, K. Q. (2017). Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4700-4708).

Since 2014, the development of neural networks has focused more on structure optimisation to improve efficiency and performance, which matters even more on small-footprint platforms such as MCUs.
However, the available NN libraries for MCUs are too low-level, which makes them very difficult to use with these complex structures.
Therefore, we built NNoM to help developers manage structures, memory and parameters, together with automatic tools for fast deployment.

Now, with NNoM, you are free to play with these more up-to-date, decent and complex structures on MCUs.

With [**Keras**](https://keras.io/) and our tools, deploying a model only takes a few lines of code; please do check the [examples](examples/).
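
As a rough sketch of what this looks like on the C side (based on the MNIST examples; the generated header name `weights.h`, the buffer `nnom_input_data` and the constructor `nnom_model_create()` are assumptions taken from the example projects, so check the [examples](examples/) for the exact generated names):

```c
#include "nnom.h"
#include "weights.h"   // header generated from the trained Keras model by the NNoM tool

int main(void)
{
    // rebuild the whole network from the generated description in weights.h
    nnom_model_t *model = nnom_model_create();

    // fill nnom_input_data[] (assumed to be declared in the generated header)
    // with one quantised input frame, e.g. a 28x28 MNIST image, then run:
    model_run(model);

    // the class scores are left in the model's output buffer
    return 0;
}
```

The `weights.h` header itself is produced on the PC side by the conversion script described in the guides above, which is where the "few lines of code" come from.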

---

## Available Operations

@@ -66,7 +68,7 @@

**Activations**

Activations can be used as standalone layers, or attached to the previous layer as an ["actail"](docs/A_Temporary_Guide_to_NNoM.md#addictionlly-activation-apis) to reduce memory cost.
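
For illustration, the difference in user code might look roughly like the sketch below (the constructor names `Dense()`, `ReLU()` and `act_relu()`, and the `model.hook()` / `model.active()` calls, are assumptions drawn from the guide linked above rather than exact signatures):

```c
// Functional API sketch: ReLU as its own layer costs an extra layer
// object and output buffer.
layer = model.hook(Dense(10, &fc_w, &fc_b), layer);
layer = model.hook(ReLU(), layer);

// The same ReLU attached as an "actail" of the Dense layer runs directly
// on that layer's output, saving the extra layer memory.
layer = model.hook(Dense(10, &fc_w, &fc_b), layer);
layer = model.active(act_relu(), layer);
```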

| Activation | Status | Layer API | Activation API | Comments |
| ------ |-- |--|--|--|
@@ -95,20 +97,20 @@
| Subtraction | Beta |Sub()||
| Dot | Under Dev. |||

---


## Dependencies

NNoM now uses its local pure C backend implementation by default, so no special dependencies are needed.

---


## Optimization
You can select [CMSIS-NN/DSP](https://github.com/ARM-software/CMSIS_5/tree/develop/CMSIS/NN) as the backend for roughly 5x performance on Arm Cortex-M4/7/33/35P.

Check the [Porting and optimising Guide](docs/Porting_and_Optimisation_Guide.md) for details.
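
As a sketch of what the switch usually amounts to (the macro name below is an assumption; the porting guide has the authoritative option names and build steps):

```c
/* in the port/configuration header, e.g. nnom_port.h (name assumed) */
#define NNOM_USING_CMSIS_NN   // route supported kernels to CMSIS-NN/DSP;
                              // the CMSIS-NN sources must also be added to the build
```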

---


## Contacts
Jianjia Ma
